[jira] [Commented] (HIVE-14261) Support set/unset partition parameters

2021-12-01 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451692#comment-17451692
 ] 

xiepengjie commented on HIVE-14261:
---

Yes, you're right. This PR is just about enriching the syntax of HS2. For 
example, we want to build a lifecycle management system for partitions, so we 
need to set different parameters on different partitions.

> Support set/unset partition parameters
> --
>
> Key: HIVE-14261
> URL: https://issues.apache.org/jira/browse/HIVE-14261
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Major
> Attachments: HIVE-14261.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HIVE-11819) HiveServer2 catches OOMs on request threads

2021-11-29 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17450826#comment-17450826
 ] 

xiepengjie commented on HIVE-11819:
---

[~zabetak], hi, would you like to discuss this issue?

> HiveServer2 catches OOMs on request threads
> ---
>
> Key: HIVE-11819
> URL: https://issues.apache.org/jira/browse/HIVE-11819
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HIVE-11819.01.patch, HIVE-11819.02.patch, 
> HIVE-11819.patch
>
>
> ThriftCLIService methods such as ExecuteStatement are apparently capable of 
> catching OOMs because they get wrapped in RTE by HiveSessionProxy. 
> This shouldn't happen.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HIVE-14261) Support set/unset partition parameters

2021-11-29 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17450823#comment-17450823
 ] 

xiepengjie commented on HIVE-14261:
---

[~zabetak], I am very happy to discuss this issue with you; I have closed 
HIVE-25739. For this issue, if we are worried about bad cases, maybe we could 
restrict setting a partition's parameters to a super user or some special 
users. But I don't think we need to worry about it, because users can still 
set them with the following code, unless HMS disables it:
{code:java}
HiveConf hiveConf = new HiveConf();
HiveMetaStoreClient hmsc = new HiveMetaStoreClient(hiveConf);
Partition partition = hmsc.getPartition("default", "test", "2021-11-29");
Map<String, String> parameters = partition.getParameters();
parameters.put("newKey", "newValue");
hmsc.alter_partition("default", "test", partition);{code}
 

> Support set/unset partition parameters
> --
>
> Key: HIVE-14261
> URL: https://issues.apache.org/jira/browse/HIVE-14261
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Major
> Attachments: HIVE-14261.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HIVE-25739) Support Alter Partition Properties

2021-11-29 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17450369#comment-17450369
 ] 

xiepengjie commented on HIVE-25739:
---

Yes, as you said, maybe we are afraid of users adding a large number of KVs. 
But the fact is that more and more companies use HMS as a unified metadata 
management system, which means it stores not only Hive tables but also Flink 
tables, Kafka topics, etc. All of them need special parameters on partitions. 
Today we set a partition's parameters through the following code:
{code:java}
HiveConf hiveConf = new HiveConf();
HiveMetaStoreClient hmsc = new HiveMetaStoreClient(hiveConf);
Partition partition = hmsc.getPartition("default", "test", "2021-11-29");
Map<String, String> parameters = partition.getParameters();
parameters.put("newKey", "newValue");
hmsc.alter_partition("default", "test", partition);{code}
So I think the restriction is in vain, and supporting this feature is more useful.

 

> Support Alter Partition Properties
> --
>
> Key: HIVE-25739
> URL: https://issues.apache.org/jira/browse/HIVE-25739
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.3.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support alter partition properties like:
> {code:java}
> alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
> ('a'='1', 'c'='3');
> alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
> exists ('c'='3');{code}
>  
> relates to https://issues.apache.org/jira/browse/HIVE-14261



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HIVE-25739) Support Alter Partition Properties

2021-11-25 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-25739:
--
Description: 
Support alter partition properties like:
{code:java}
alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
('a'='1', 'c'='3');
alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
exists ('c'='3');{code}
 

relates to https://issues.apache.org/jira/browse/HIVE-14261

  was:
Support alter partition properties like:
{code:java}
alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
('a'='1', 'c'='3');
alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
exists ('c'='3');{code}
 


> Support Alter Partition Properties
> --
>
> Key: HIVE-25739
> URL: https://issues.apache.org/jira/browse/HIVE-25739
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.3.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support alter partition properties like:
> {code:java}
> alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
> ('a'='1', 'c'='3');
> alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
> exists ('c'='3');{code}
>  
> relates to https://issues.apache.org/jira/browse/HIVE-14261



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (HIVE-25739) Support Alter Partition Properties

2021-11-25 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-25739:
--
Description: 
Support alter partition properties like:
{code:java}
alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
('a'='1', 'c'='3');
alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
exists ('c'='3');{code}
 

  was:
Support alter partition properties like:
{code:java}
alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
('a'='1', 'c'='3');
alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
exists ('c'='3');{code}


> Support Alter Partition Properties
> --
>
> Key: HIVE-25739
> URL: https://issues.apache.org/jira/browse/HIVE-25739
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.3.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support alter partition properties like:
> {code:java}
> alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
> ('a'='1', 'c'='3');
> alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
> exists ('c'='3');{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (HIVE-25739) Support Alter Partition Properties

2021-11-25 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-25739:
-


> Support Alter Partition Properties
> --
>
> Key: HIVE-25739
> URL: https://issues.apache.org/jira/browse/HIVE-25739
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Fix For: 2.3.8
>
>
> Support alter partition properties like:
> {code:java}
> alter table alter1 partition(insertdate='2008-01-01') set tblproperties 
> ('a'='1', 'c'='3');
> alter table alter1 partition(insertdate='2008-01-01') unset tblproperties if 
> exists ('c'='3');{code}
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (HIVE-11819) HiveServer2 catches OOMs on request threads

2021-10-31 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436617#comment-17436617
 ] 

xiepengjie edited comment on HIVE-11819 at 11/1/21, 5:39 AM:
-

The report method in FutureTask wraps every EXCEPTIONAL (value 3) outcome in 
an ExecutionException:
{code:java}
private V report(int s) throws ExecutionException {
    Object x = outcome;
    if (s == NORMAL)
        return (V)x;
    if (s >= CANCELLED)
        throw new CancellationException();
    throw new ExecutionException((Throwable)x);
}
{code}
So the check below, in the method 
org.jeff.juc.ThreadPoolExecutorWithOomHook#afterExecute, never matches, 
because the OutOfMemoryError arrives wrapped:
{code:java}
if (t instanceof OutOfMemoryError) {
  oomHook.run();
}
{code}
We can fix it like this (guarding against a null throwable, since 
afterExecute may be called with t == null):
{code:java}
if (t instanceof OutOfMemoryError
    || (t != null && t.getCause() instanceof OutOfMemoryError)) {
  oomHook.run();
}
{code}
otherwise the current fix is ineffective.
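The wrapping can be demonstrated in a few lines. This is a standalone sketch; the class name and the simulated error are illustrative, not Hive code:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OomWrappingDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The task throws an Error; FutureTask catches it and records the
        // EXCEPTIONAL outcome instead of letting it propagate.
        Future<?> f = pool.submit(() -> { throw new OutOfMemoryError("simulated"); });
        try {
            f.get();
        } catch (ExecutionException e) {
            // The OOM surfaces only as the *cause* of the ExecutionException,
            // so a plain `instanceof OutOfMemoryError` check on the wrapper
            // never fires.
            System.out.println(e.getCause() instanceof OutOfMemoryError); // prints "true"
        } finally {
            pool.shutdown();
        }
    }
}
```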

 

[~sershe] [~vgumashta] ,can you take a look?


was (Author: xiepengjie):
The report method in FutureTask wrapped all EXCEPTIONAL(value is 3) state as 
ExecutionException.
{code:java}
private V report(int s) throws ExecutionException {
Object x = outcome;
if (s == NORMAL)
return (V)x;
if (s >= CANCELLED)
throw new CancellationException();
throw new ExecutionException((Throwable)x);
}
{code}
So, such as the below code in the methed 
org.jeff.juc.ThreadPoolExecutorWithOomHook#afterExecute
{code:java}
if (t instanceof OutOfMemoryError) {
  oomHook.run();
}
{code}
this shouldn't happen.

We can fix it like this,
{code:java}
if (t instanceof OutOfMemoryError || t.getCause() instanceof OutOfMemoryError) {
oomHook.run();
}
{code}
otherwise this fix will be invalided.

 

[~sershe] [~vgumashta] ,can you take a look?

> HiveServer2 catches OOMs on request threads
> ---
>
> Key: HIVE-11819
> URL: https://issues.apache.org/jira/browse/HIVE-11819
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HIVE-11819.01.patch, HIVE-11819.02.patch, 
> HIVE-11819.patch
>
>
> ThriftCLIService methods such as ExecuteStatement are apparently capable of 
> catching OOMs because they get wrapped in RTE by HiveSessionProxy. 
> This shouldn't happen.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-11819) HiveServer2 catches OOMs on request threads

2021-10-31 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436617#comment-17436617
 ] 

xiepengjie commented on HIVE-11819:
---

The report method in FutureTask wraps every EXCEPTIONAL (value 3) outcome in 
an ExecutionException:
{code:java}
private V report(int s) throws ExecutionException {
    Object x = outcome;
    if (s == NORMAL)
        return (V)x;
    if (s >= CANCELLED)
        throw new CancellationException();
    throw new ExecutionException((Throwable)x);
}
{code}
So the check below, in the method 
org.jeff.juc.ThreadPoolExecutorWithOomHook#afterExecute, never matches, 
because the OutOfMemoryError arrives wrapped:
{code:java}
if (t instanceof OutOfMemoryError) {
  oomHook.run();
}
{code}
We can fix it like this (guarding against a null throwable, since 
afterExecute may be called with t == null):
{code:java}
if (t instanceof OutOfMemoryError
    || (t != null && t.getCause() instanceof OutOfMemoryError)) {
  oomHook.run();
}
{code}
otherwise this fix is ineffective.

 

[~sershe] [~vgumashta] ,can you take a look?

> HiveServer2 catches OOMs on request threads
> ---
>
> Key: HIVE-11819
> URL: https://issues.apache.org/jira/browse/HIVE-11819
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: HIVE-11819.01.patch, HIVE-11819.02.patch, 
> HIVE-11819.patch
>
>
> ThriftCLIService methods such as ExecuteStatement are apparently capable of 
> catching OOMs because they get wrapped in RTE by HiveSessionProxy. 
> This shouldn't happen.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HIVE-24959) Hive JDBC throws java.net.SocketTimeoutException: Read timed out

2021-03-31 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie resolved HIVE-24959.
---
Release Note: 
duplicates
https://issues.apache.org/jira/browse/HIVE-12371
  Resolution: Fixed

> Hive JDBC throws  java.net.SocketTimeoutException: Read timed out
> -
>
> Key: HIVE-24959
> URL: https://issues.apache.org/jira/browse/HIVE-24959
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> In the hive-jdbc client side, the timeout comes from 
> DriverManager.getLoginTimeout(), but that timeout is a global parameter:
> {code:java}
> public class DriverManager {
>     ...
>     private static volatile int loginTimeout = 0;
>     ...
>     public static void setLoginTimeout(int seconds) {
>         loginTimeout = seconds;
>     }
>     ...
>     public static int getLoginTimeout() {
>         return (loginTimeout);
>     }
> }
> {code}
> When different JDBC drivers are used in the same JVM (for example, mysql-jdbc 
> sets the timeout to 10 while hive-jdbc should keep the default 0), they 
> affect each other. So we should allow users to set the timeout on 
> HiveConnection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24959) Hive JDBC throws java.net.SocketTimeoutException: Read timed out

2021-03-31 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17312168#comment-17312168
 ] 

xiepengjie commented on HIVE-24959:
---

duplicates

https://issues.apache.org/jira/browse/HIVE-12371

 

> Hive JDBC throws  java.net.SocketTimeoutException: Read timed out
> -
>
> Key: HIVE-24959
> URL: https://issues.apache.org/jira/browse/HIVE-24959
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> In the hive-jdbc client side, the timeout comes from 
> DriverManager.getLoginTimeout(), but that timeout is a global parameter:
> {code:java}
> public class DriverManager {
>     ...
>     private static volatile int loginTimeout = 0;
>     ...
>     public static void setLoginTimeout(int seconds) {
>         loginTimeout = seconds;
>     }
>     ...
>     public static int getLoginTimeout() {
>         return (loginTimeout);
>     }
> }
> {code}
> When different JDBC drivers are used in the same JVM (for example, mysql-jdbc 
> sets the timeout to 10 while hive-jdbc should keep the default 0), they 
> affect each other. So we should allow users to set the timeout on 
> HiveConnection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24959) Hive JDBC throws java.net.SocketTimeoutException: Read timed out

2021-03-31 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-24959:
--
Description: 
In the hive-jdbc client side, timeout comes from 
DriverManager.getLoginTimeout(), but the timeout is global parameter like this:
{code:java}
public class DriverManager {
...
private static volatile int loginTimeout = 0;
...
public static void setLoginTimeout(int seconds) {
loginTimeout = seconds;
}
...
public static int getLoginTimeout() {
return (loginTimeout);
}
{code}
when using different jdbc in the same jvm, for example: mysql-jdbc setup 
timeout 10, but hive-jdbc should be 0, it will affect each other. so, we should 
allowed user setupTimeout in HiveConnection.
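The cross-driver effect can be reproduced with a minimal sketch; no real driver is loaded here, only the shared static field is exercised, and the class name is illustrative:

```java
import java.sql.DriverManager;

public class LoginTimeoutDemo {
    public static void main(String[] args) {
        // DriverManager keeps a single static loginTimeout for the whole JVM.
        DriverManager.setLoginTimeout(10); // e.g. configured for mysql-jdbc
        // Any hive-jdbc connection opened afterwards reads the same value,
        // even though it expects the default of 0.
        System.out.println(DriverManager.getLoginTimeout()); // prints "10"
    }
}
```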

> Hive JDBC throws  java.net.SocketTimeoutException: Read timed out
> -
>
> Key: HIVE-24959
> URL: https://issues.apache.org/jira/browse/HIVE-24959
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> In the hive-jdbc client side, the timeout comes from 
> DriverManager.getLoginTimeout(), but that timeout is a global parameter:
> {code:java}
> public class DriverManager {
>     ...
>     private static volatile int loginTimeout = 0;
>     ...
>     public static void setLoginTimeout(int seconds) {
>         loginTimeout = seconds;
>     }
>     ...
>     public static int getLoginTimeout() {
>         return (loginTimeout);
>     }
> }
> {code}
> When different JDBC drivers are used in the same JVM (for example, mysql-jdbc 
> sets the timeout to 10 while hive-jdbc should keep the default 0), they 
> affect each other. So we should allow users to set the timeout on 
> HiveConnection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-24959) Hive JDBC throws java.net.SocketTimeoutException: Read timed out

2021-03-31 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-24959:
-


> Hive JDBC throws  java.net.SocketTimeoutException: Read timed out
> -
>
> Key: HIVE-24959
> URL: https://issues.apache.org/jira/browse/HIVE-24959
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24573) hive 3.1.2 drop table Sometimes it can't be deleted

2021-03-17 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303540#comment-17303540
 ] 

xiepengjie commented on HIVE-24573:
---

What is your database name: hive.dc_usermanage or dc_usermanage?

> hive 3.1.2 drop table Sometimes it can't be deleted
> ---
>
> Key: HIVE-24573
> URL: https://issues.apache.org/jira/browse/HIVE-24573
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: paul
>Priority: Blocker
>
> Executing the statement drop table if exists trade_4_Temp448, the table 
> cannot be deleted. hive.log shows:
>   2020-12-29T07:30:04,840 ERROR [HiveServer2-Background-Pool: Thread-6483] 
> metadata.Hive: Table dc_usermanage.trade_3_temp448 not found: 
> hive.dc_usermanage.trade_3_temp448 table not found
>  
> The statement returns success.
>  
> I suspect this problem only arises under high concurrency. We run a lot of 
> tasks every day, and one or two of them hit it each day.
>  
> metastore: mysql
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24749) Disable user's UDF use SystemExit

2021-03-03 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294635#comment-17294635
 ] 

xiepengjie commented on HIVE-24749:
---

[~okumin], thanks for taking a look. Maybe we can fix it like this: 
[https://www.javacodegeeks.com/2013/11/preventing-system-exit-calls.html]

> Disable user's UDF use SystemExit
> -
>
> Key: HIVE-24749
> URL: https://issues.apache.org/jira/browse/HIVE-24749
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If System.exit() is executed in a user's UDF while the default 
> SecurityManager is in use, it will cause the HS2 service process to exit, 
> which is very bad.
> It is safer to use a NoExitSecurityManager, which can intercept System.exit().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-24749) Disable user's UDF use SystemExit

2021-02-07 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-24749:
-

  Component/s: HiveServer2
Affects Version/s: All Versions
 Assignee: xiepengjie
  Description: 
If System.exit() is executed in a user's UDF while the default SecurityManager 
is in use, it will cause the HS2 service process to exit, which is very bad.

It is safer to use a NoExitSecurityManager, which can intercept System.exit().
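A minimal sketch of such a manager; the class name and message are illustrative, not Hive's actual implementation, and note that installing a SecurityManager is deprecated in recent JDKs:

```java
import java.security.Permission;

// Intercepts System.exit() so a user's UDF cannot kill the HS2 process.
public class NoExitSecurityManager extends SecurityManager {
    @Override
    public void checkExit(int status) {
        // Turn the fatal exit into a catchable exception.
        throw new SecurityException("System.exit(" + status + ") is not allowed");
    }

    @Override
    public void checkPermission(Permission perm) {
        // Permit everything else; only JVM exit is blocked.
    }
}
```

It would be installed once at server startup with System.setSecurityManager(new NoExitSecurityManager()), after which a System.exit() in a UDF raises a SecurityException instead of terminating HS2.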

> Disable user's UDF use SystemExit
> -
>
> Key: HIVE-24749
> URL: https://issues.apache.org/jira/browse/HIVE-24749
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: All Versions
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> If System.exit() is executed in a user's UDF while the default 
> SecurityManager is in use, it will cause the HS2 service process to exit, 
> which is very bad.
> It is safer to use a NoExitSecurityManager, which can intercept System.exit().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24749) Disable user's UDF use SystemExit

2021-02-07 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-24749:
--
Summary: Disable user's UDF use SystemExit  (was: Disable user's UDF)

> Disable user's UDF use SystemExit
> -
>
> Key: HIVE-24749
> URL: https://issues.apache.org/jira/browse/HIVE-24749
> Project: Hive
>  Issue Type: Bug
>Reporter: xiepengjie
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24132) Metastore client doesn't close connection properly

2020-09-08 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-24132:
--
Description: 
While closing metastore client connection, sometimes throws warning log with 
following trace. 
{code:java}
2020-09-09 10:56:14,408 WARN org.apache.thrift.transport.TIOStreamTransport: 
Error closing output stream.
java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:116)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at 
org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TSocket.close(TSocket.java:235)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:506)
at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
at com.sun.proxy.$Proxy6.close(Unknown Source)
at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:1992)
at com.sun.proxy.$Proxy6.close(Unknown Source)
at org.apache.hadoop.hive.ql.metadata.Hive.close(Hive.java:320)
at org.apache.hadoop.hive.ql.metadata.Hive.access$000(Hive.java:143)
at org.apache.hadoop.hive.ql.metadata.Hive$1.remove(Hive.java:167)
at org.apache.hadoop.hive.ql.metadata.Hive.closeCurrent(Hive.java:288)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.close(HiveSessionImpl.java:616)
at 
org.apache.hive.service.cli.session.HiveSessionImplwithUGI.close(HiveSessionImplwithUGI.java:93)
at sun.reflect.GeneratedMethodAccessor117.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
at 
org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
at com.sun.proxy.$Proxy19.close(Unknown Source)
at 
org.apache.hive.service.cli.session.SessionManager.closeSession(SessionManager.java:300)
at 
org.apache.hive.service.cli.CLIService.closeSession(CLIService.java:237)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.CloseSession(ThriftCLIService.java:464)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession.getResult(TCLIService.java:1273)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$CloseSession.getResult(TCLIService.java:1258)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:57)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

> Metastore client doesn't close connection properly
> --
>
> Key: HIVE-24132
> URL: https://issues.apache.org/jira/browse/HIVE-24132
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1
>Reporter: xiepengjie
>Priority: Major
>
> While closing metastore client connection, sometimes throws warning log with 
> following trace. 
> {code:java}
> 2020-09-09 10:56:14,

[jira] [Commented] (HIVE-22344) I can't run hive in command line

2020-07-14 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157449#comment-17157449
 ] 

xiepengjie commented on HIVE-22344:
---

Because the version of Guava bundled with Hadoop is different from Hive's.

> I can't run hive in command line
> 
>
> Key: HIVE-22344
> URL: https://issues.apache.org/jira/browse/HIVE-22344
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 3.1.2
> Environment: hive: 3.1.2
> hadoop 3.2.1
>  
>Reporter: Smith Cruise
>Priority: Blocker
>
> I can't run hive from the command line. It tells me:
> {code:java}
> [hadoop@master lib]$ hive
> which: no hbase in 
> (/home/hadoop/apache-hive-3.1.2-bin/bin:{{pwd}}/bin:/home/hadoop/.local/bin:/home/hadoop/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/home/hadoop/apache-hive-3.1.2-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/home/hadoop/hadoop3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
> at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
> at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:448)
> at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)
> at org.apache.hadoop.hive.conf.HiveConf.(HiveConf.java:5099)
> at 
> org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:97)
> at 
> org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> {code}
> I don't know what's wrong with it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-22668) ClassNotFoundException:HiveHBaseTableInputFormat when tez include reduce operation

2020-07-14 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157443#comment-17157443
 ] 

xiepengjie edited comment on HIVE-22668 at 7/14/20, 3:03 PM:
-

Your jar is stored locally; you should upload it to remote storage such as 
HDFS, and then you can add the jar like this:
{code}
add jar hdfs:///usr/hdp/hive-hbase-handler-3.1.0.3.1.4.0-315.jar
{code}
Beeline and the jar are on your local machine, but the Thrift server of 
HiveServer2 is on a remote machine, so HS2 cannot find the path 
'/usr/hdp/3.1.4.0-315/hive/lib/...'

 

If that doesn't work, maybe you should add the jar to HIVE_AUX_JARS_PATH and 
restart HiveServer2:
{code}
export HIVE_AUX_JARS_PATH='/hive/aux/jar/path/a.jar,/hive/aux/jar/path/b.jar'
{code}


was (Author: xiepengjie):
Your jar are stored locally, you should upload it to remote space like hdfs, 
then, you can add jar like this:

```

add jar hdfs:///usr/hdp/hive-hbase-handler-3.1.0.3.1.4.0-315.jar

```

Beeline and jar are on your local machine, but the thrift server of hiveserver2 
on the remote machine, hs2 can not find the path 
'/usr/hdp/3.1.4.0-315/hive/lib/...'

 

If it does't work, maybe you should add the add in HIVE_AUX_JARS_PATH and 
restart hiveserver2:

```

export HIVE_AUX_JARS_PATH='/hive/aux/jar/path/a.jar,/hive/aux/jar/path/b.jar'

```

> ClassNotFoundException:HiveHBaseTableInputFormat when tez include reduce 
> operation
> --
>
> Key: HIVE-22668
> URL: https://issues.apache.org/jira/browse/HIVE-22668
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
>Affects Versions: 3.1.0
>Reporter: Michael
>Priority: Blocker
>
> When I use beeline to execute a script that inserts data from Hive into 
> HBase, and the operation includes a reduce step, this exception appears.
> I tried to add the jars in beeline like this:
> {code:java}
> ADD JAR /usr/hdp/3.1.4.0-315/hive/lib/hive-hbase-handler-3.1.0.3.1.4.0-315.jar
> ADD JAR /usr/hdp/3.1.4.0-315/hive/lib/guava-28.0-jre.jar
> ADD JAR /usr/hdp/3.1.4.0-315/hive/lib/zookeeper-3.4.6.3.1.4.0-315.jar{code}
> but the problem still exists. 
> {code:java}
> Serialization trace:
> inputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)
> tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)
> conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)
> childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
> childOperators (org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator)
> reducer (org.apache.hadoop.hive.ql.plan.ReduceWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:185)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:326)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:314)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:759)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObjectOrNull(SerializationUtilities.java:203)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:132)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:218)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:218)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
> at 
> org.apache.hadoop.hive.ql.exec.Se

[jira] [Commented] (HIVE-22668) ClassNotFoundException:HiveHBaseTableInputFormat when tez include reduce operation

2020-07-14 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157443#comment-17157443
 ] 

xiepengjie commented on HIVE-22668:
---

Your jar is stored locally; you should upload it to a remote filesystem such as 
HDFS, and then add the jar like this:

```

add jar hdfs:///usr/hdp/hive-hbase-handler-3.1.0.3.1.4.0-315.jar

```

Beeline and the jar are on your local machine, but the HiveServer2 thrift server 
runs on a remote machine, so HS2 cannot find the path 
'/usr/hdp/3.1.4.0-315/hive/lib/...'

 

If that doesn't work, you may need to add the jar to HIVE_AUX_JARS_PATH and 
restart HiveServer2:

```

export HIVE_AUX_JARS_PATH='/hive/aux/jar/path/a.jar,/hive/aux/jar/path/b.jar'

```

> ClassNotFoundException:HiveHBaseTableInputFormat when tez include reduce 
> operation
> --
>
> Key: HIVE-22668
> URL: https://issues.apache.org/jira/browse/HIVE-22668
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
>Affects Versions: 3.1.0
>Reporter: Michael
>Priority: Blocker
>
> When I use beeline to execute a script that inserts data from Hive into 
> HBase, and the operation includes a reduce step, this exception appears.
> I tried to add the jars in beeline like this:
> {code:java}
> ADD JAR /usr/hdp/3.1.4.0-315/hive/lib/hive-hbase-handler-3.1.0.3.1.4.0-315.jar
> ADD JAR /usr/hdp/3.1.4.0-315/hive/lib/guava-28.0-jre.jar
> ADD JAR /usr/hdp/3.1.4.0-315/hive/lib/zookeeper-3.4.6.3.1.4.0-315.jar{code}
> but the problem still exists. 
> {code:java}
> Serialization trace:
> inputFileFormatClass (org.apache.hadoop.hive.ql.plan.TableDesc)
> tableInfo (org.apache.hadoop.hive.ql.plan.FileSinkDesc)
> conf (org.apache.hadoop.hive.ql.exec.FileSinkOperator)
> childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
> childOperators (org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator)
> reducer (org.apache.hadoop.hive.ql.plan.ReduceWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:185)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:326)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:314)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:759)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObjectOrNull(SerializationUtilities.java:203)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:132)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:218)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:218)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:180)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
> at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:218)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializ

[jira] [Work started] (HIVE-22247) HiveHFileOutputFormat throws FileNotFoundException when partition's task output empty

2020-07-08 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22247 started by xiepengjie.
-
> HiveHFileOutputFormat throws FileNotFoundException when partition's task 
> output empty
> -
>
> Key: HIVE-22247
> URL: https://issues.apache.org/jira/browse/HIVE-22247
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 2.2.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> When a partition's task output is empty, HiveHFileOutputFormat throws a 
> FileNotFoundException like this:
> {code:java}
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 1 finished. closing... 
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
> 2019-09-24 19:15:55,915 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2019-09-24 19:15:55,954 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2019-09-24 19:15:56,089 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2019-09-24 19:15:56,090 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:287)
>   at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:453)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:200)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1016)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:278)
>   ... 7 more
> Caused by: java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:880)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:109)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:938)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:934)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:945)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1592)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1632)
>   at 
> org.apache.hadoop.hive.hbase.HiveHFileOutputFormat$1.close(HiveHFileOutputFormat.java:153)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.ja
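The defensive pattern behind this bug report — treat a missing task-output directory as "no files" instead of letting the listing call throw — can be sketched in plain Java. This is an illustration of the pattern only, using `java.nio.file` stand-ins rather than the actual Hadoop `FileSystem`/`HiveHFileOutputFormat` API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FsGuard {
    // An empty reduce task may never create its tmp output directory,
    // so listing it unconditionally throws FileNotFoundException.
    // Guarding with an existence check returns an empty list instead.
    public static List<Path> listIfExists(Path dir) throws IOException {
        if (!Files.isDirectory(dir)) {
            return Collections.emptyList();
        }
        try (Stream<Path> entries = Files.list(dir)) {
            return entries.collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical path standing in for _task_tmp.-ext-10002/_tmp.02_0
        Path missing = Paths.get("definitely-missing-task-output");
        System.out.println(listIfExists(missing).size()); // prints 0
    }
}
```

In the Hive code path, the equivalent guard would sit in the writer's `close()` before `FileSystem.listStatus` is called on the task's tmp directory.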

[jira] [Updated] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-18 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22412:
--
Attachment: (was: HIVE-22412.patch)

> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22412.patch
>
>
> The demo is like this:
> {code:java}
> drop table if exists explain_npe_map;
> drop table if exists explain_npe_array;
> drop table if exists explain_npe_struct;
> create table explain_npe_map( c1 map );
> create table explain_npe_array  ( c1 array );
> create table explain_npe_struct ( c1 struct );
> -- error
> set hive.cbo.enable=false;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;
> -- correct
> set hive.cbo.enable=true;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;{code}
>  
> If the conf 'hive.cbo.enable' is set to false, an NPE will be thrown; 
> otherwise it will not.
> {code:java}
> hive> drop table if exists explain_npe_map;
> OK
> Time taken: 0.063 seconds
> hive> drop table if exists explain_npe_array;
> OK
> Time taken: 0.035 seconds
> hive> drop table if exists explain_npe_struct;
> OK
> Time taken: 0.015 seconds
> hive>
> > create table explain_npe_map( c1 map );
> OK
> Time taken: 0.584 seconds
> hive> create table explain_npe_array  ( c1 array );
> OK
> Time taken: 0.216 seconds
> hive> create table explain_npe_struct ( c1 struct );
> OK
> Time taken: 0.17 seconds
> hive>
> > set hive.cbo.enable=false;
> hive> explain select c1 from explain_npe_map where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_array where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_struct where c1 is null;
> FAILED: RuntimeException Error invoking signature method
> hive>
> > set hive.cbo.enable=true;
> hive> explain select c1 from explain_npe_map where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_map
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: map)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.593 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_array where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_array
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: array)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.969 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_struct where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_struct
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: struct)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 2.932 seconds, Fetched: 20 row(s)
> hive>
> {code}
> ms error like:
> for map:
> {code:java}
> java.lang.Null
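The null-guard pattern that fixes this class of NPE can be illustrated in plain Java. `ColStats` here is a hypothetical stand-in (not the actual Hive `ColStatistics` class): stats for map/array/struct columns may legitimately be absent when CBO is disabled, so size estimation should fall back to a default instead of dereferencing a possibly-null object:

```java
public class NullSafeStats {
    // Hypothetical per-column statistics holder; in Hive this would be
    // a ColStatistics instance, which can be null for complex types.
    static class ColStats {
        final long avgColLen;
        ColStats(long avgColLen) { this.avgColLen = avgColLen; }
    }

    // Null-guard: return a default estimate when stats are missing
    // rather than throwing a NullPointerException.
    public static long estimateSize(ColStats stats, long defaultLen) {
        return (stats == null) ? defaultLen : stats.avgColLen;
    }

    public static void main(String[] args) {
        System.out.println(estimateSize(null, 24));            // prints 24
        System.out.println(estimateSize(new ColStats(8), 24)); // prints 8
    }
}
```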

[jira] [Updated] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-18 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22412:
--
Attachment: HIVE-22412.patch
Status: Patch Available  (was: Open)

> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.0.0, 1.2.1
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22412.patch, HIVE-22412.patch
>
>
> The demo is like this:
> {code:java}
> drop table if exists explain_npe_map;
> drop table if exists explain_npe_array;
> drop table if exists explain_npe_struct;
> create table explain_npe_map( c1 map );
> create table explain_npe_array  ( c1 array );
> create table explain_npe_struct ( c1 struct );
> -- error
> set hive.cbo.enable=false;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;
> -- correct
> set hive.cbo.enable=true;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;{code}
>  
> If the conf 'hive.cbo.enable' is set to false, an NPE will be thrown; 
> otherwise it will not.
> {code:java}
> hive> drop table if exists explain_npe_map;
> OK
> Time taken: 0.063 seconds
> hive> drop table if exists explain_npe_array;
> OK
> Time taken: 0.035 seconds
> hive> drop table if exists explain_npe_struct;
> OK
> Time taken: 0.015 seconds
> hive>
> > create table explain_npe_map( c1 map );
> OK
> Time taken: 0.584 seconds
> hive> create table explain_npe_array  ( c1 array );
> OK
> Time taken: 0.216 seconds
> hive> create table explain_npe_struct ( c1 struct );
> OK
> Time taken: 0.17 seconds
> hive>
> > set hive.cbo.enable=false;
> hive> explain select c1 from explain_npe_map where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_array where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_struct where c1 is null;
> FAILED: RuntimeException Error invoking signature method
> hive>
> > set hive.cbo.enable=true;
> hive> explain select c1 from explain_npe_map where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_map
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: map)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.593 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_array where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_array
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: array)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.969 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_struct where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_struct
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: struct)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 2.932 seconds, Fetched: 20 row(s)
> hive>
> {code}
> ms e

[jira] [Commented] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-14 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17135057#comment-17135057
 ] 

xiepengjie commented on HIVE-22412:
---

[~kgyrtkirk]: hi, could you help me review this patch?

> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22412.patch
>
>
> The demo is like this:
> {code:java}
> drop table if exists explain_npe_map;
> drop table if exists explain_npe_array;
> drop table if exists explain_npe_struct;
> create table explain_npe_map( c1 map );
> create table explain_npe_array  ( c1 array );
> create table explain_npe_struct ( c1 struct );
> -- error
> set hive.cbo.enable=false;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;
> -- correct
> set hive.cbo.enable=true;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;{code}
>  
> If the conf 'hive.cbo.enable' is set to false, an NPE will be thrown; 
> otherwise it will not.
> {code:java}
> hive> drop table if exists explain_npe_map;
> OK
> Time taken: 0.063 seconds
> hive> drop table if exists explain_npe_array;
> OK
> Time taken: 0.035 seconds
> hive> drop table if exists explain_npe_struct;
> OK
> Time taken: 0.015 seconds
> hive>
> > create table explain_npe_map( c1 map );
> OK
> Time taken: 0.584 seconds
> hive> create table explain_npe_array  ( c1 array );
> OK
> Time taken: 0.216 seconds
> hive> create table explain_npe_struct ( c1 struct );
> OK
> Time taken: 0.17 seconds
> hive>
> > set hive.cbo.enable=false;
> hive> explain select c1 from explain_npe_map where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_array where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_struct where c1 is null;
> FAILED: RuntimeException Error invoking signature method
> hive>
> > set hive.cbo.enable=true;
> hive> explain select c1 from explain_npe_map where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_map
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: map)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.593 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_array where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_array
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: array)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.969 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_struct where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_struct
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: struct)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 2.932 seconds, Fetched: 20 row(s)

[jira] [Updated] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-14 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22412:
--
Attachment: HIVE-22412.patch

> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22412.patch
>
>
> The demo is like this:
> {code:java}
> drop table if exists explain_npe_map;
> drop table if exists explain_npe_array;
> drop table if exists explain_npe_struct;
> create table explain_npe_map( c1 map );
> create table explain_npe_array  ( c1 array );
> create table explain_npe_struct ( c1 struct );
> -- error
> set hive.cbo.enable=false;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;
> -- correct
> set hive.cbo.enable=true;
> explain select c1 from explain_npe_map where c1 is null;
> explain select c1 from explain_npe_array where c1 is null;
> explain select c1 from explain_npe_struct where c1 is null;{code}
>  
> If the conf 'hive.cbo.enable' is set to false, an NPE will be thrown; 
> otherwise it will not.
> {code:java}
> hive> drop table if exists explain_npe_map;
> OK
> Time taken: 0.063 seconds
> hive> drop table if exists explain_npe_array;
> OK
> Time taken: 0.035 seconds
> hive> drop table if exists explain_npe_struct;
> OK
> Time taken: 0.015 seconds
> hive>
> > create table explain_npe_map( c1 map );
> OK
> Time taken: 0.584 seconds
> hive> create table explain_npe_array  ( c1 array );
> OK
> Time taken: 0.216 seconds
> hive> create table explain_npe_struct ( c1 struct );
> OK
> Time taken: 0.17 seconds
> hive>
> > set hive.cbo.enable=false;
> hive> explain select c1 from explain_npe_map where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_array where c1 is null;
> FAILED: NullPointerException null
> hive> explain select c1 from explain_npe_struct where c1 is null;
> FAILED: RuntimeException Error invoking signature method
> hive>
> > set hive.cbo.enable=true;
> hive> explain select c1 from explain_npe_map where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_map
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: map)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.593 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_array where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_array
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: array)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 1.969 seconds, Fetched: 20 row(s)
> hive> explain select c1 from explain_npe_struct where c1 is null;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
>
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: explain_npe_struct
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
>   Filter Operator
> predicate: false (type: boolean)
> Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
> stats: NONE
> Select Operator
>   expressions: c1 (type: struct)
>   outputColumnNames: _col0
>   Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL 
> Column stats: NONE
>   ListSink
> Time taken: 2.932 seconds, Fetched: 20 row(s)
> hive>
> {code}
> ms error like:
> for map:
> {code:java}
> java.lang.NullPointerExce

[jira] [Updated] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-14 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22412:
--
Description: 
The demo is like this:
{code:java}
drop table if exists explain_npe_map;
drop table if exists explain_npe_array;
drop table if exists explain_npe_struct;

create table explain_npe_map( c1 map );
create table explain_npe_array  ( c1 array );
create table explain_npe_struct ( c1 struct );

-- error
set hive.cbo.enable=false;
explain select c1 from explain_npe_map where c1 is null;
explain select c1 from explain_npe_array where c1 is null;
explain select c1 from explain_npe_struct where c1 is null;

-- correct
set hive.cbo.enable=true;
explain select c1 from explain_npe_map where c1 is null;
explain select c1 from explain_npe_array where c1 is null;
explain select c1 from explain_npe_struct where c1 is null;{code}
 

If the conf 'hive.cbo.enable' is set to false, an NPE will be thrown; 
otherwise it will not.
{code:java}
hive> drop table if exists explain_npe_map;
OK
Time taken: 0.063 seconds
hive> drop table if exists explain_npe_array;
OK
Time taken: 0.035 seconds
hive> drop table if exists explain_npe_struct;
OK
Time taken: 0.015 seconds
hive>
> create table explain_npe_map( c1 map );
OK
Time taken: 0.584 seconds
hive> create table explain_npe_array  ( c1 array );
OK
Time taken: 0.216 seconds
hive> create table explain_npe_struct ( c1 struct );
OK
Time taken: 0.17 seconds
hive>
> set hive.cbo.enable=false;
hive> explain select c1 from explain_npe_map where c1 is null;
FAILED: NullPointerException null
hive> explain select c1 from explain_npe_array where c1 is null;
FAILED: NullPointerException null
hive> explain select c1 from explain_npe_struct where c1 is null;
FAILED: RuntimeException Error invoking signature method
hive>
> set hive.cbo.enable=true;
hive> explain select c1 from explain_npe_map where c1 is null;
OK
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: -1
  Processor Tree:
TableScan
  alias: explain_npe_map
  Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
  Filter Operator
predicate: false (type: boolean)
Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
Select Operator
  expressions: c1 (type: map)
  outputColumnNames: _col0
  Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
          ListSink
Time taken: 1.593 seconds, Fetched: 20 row(s)
hive> explain select c1 from explain_npe_array where c1 is null;
OK
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: -1
  Processor Tree:
TableScan
  alias: explain_npe_array
  Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
  Filter Operator
predicate: false (type: boolean)
Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
Select Operator
  expressions: c1 (type: array)
  outputColumnNames: _col0
  Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
          ListSink
Time taken: 1.969 seconds, Fetched: 20 row(s)
hive> explain select c1 from explain_npe_struct where c1 is null;
OK
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
Fetch Operator
  limit: -1
  Processor Tree:
TableScan
  alias: explain_npe_struct
  Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
  Filter Operator
predicate: false (type: boolean)
Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
Select Operator
  expressions: c1 (type: struct)
  outputColumnNames: _col0
  Statistics: Num rows: 1 Data size: 0 Basic stats: PARTIAL Column 
stats: NONE
          ListSink
Time taken: 2.932 seconds, Fetched: 20 row(s)
hive>
{code}
The error looks like:

for map:
{code:java}
java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfMap(StatsUtils.java:1045)
        at org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfComplexTypes(StatsUtils.java:931)
        at org.apache.hadoop.hive.ql.stats.StatsUtils.getAvgColLenOfVariableLengthTypes(StatsUtils.java:869)
        at org.apache.hadoop.hive.ql.stats.StatsUtils.estimateRowSizeFromSchema(StatsUtils.java:526)
        at org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:223)
        at org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:136)
        at org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:124)
        at org.apache.

[jira] [Commented] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-10 Thread xiepengjie (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17130444#comment-17130444
 ] 

xiepengjie commented on HIVE-22412:
---

[~kgyrtkirk]: Thanks, the version is Hive 3.2.0-SNAPSHOT

> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>

[jira] [Updated] (HIVE-22412) StatsUtils throw NPE when explain

2020-06-10 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22412:
--
Affects Version/s: 1.2.1
   2.0.0
   3.0.0

> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> The demo looks like this:
> {code:java}
> set hive.cbo.enable=false;
> create table explain_npe ( c1 map );
> explain select c1 from explain_npe where c1 is null;
> create table explain_npe_1 ( c1 array );
> explain select c1 from explain_npe_1 where c1 is null;{code}
> The error looks like:
> {code:java}
> 2019-10-10 09:11:52,670 ERROR ql.Driver (SessionState.java:printError(1068)) - FAILED: NullPointerException null
> java.lang.NullPointerException
> at org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfMap(StatsUtils.java:1045)
> at org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfComplexTypes(StatsUtils.java:931)
> at org.apache.hadoop.hive.ql.stats.StatsUtils.getAvgColLenOfVariableLengthTypes(StatsUtils.java:869)
> at org.apache.hadoop.hive.ql.stats.StatsUtils.estimateRowSizeFromSchema(StatsUtils.java:526)
> at org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:223)
> at org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:136)
> at org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:124)
> at org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:111)
> at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
> at org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
> at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
> at org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
> at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:192)
> at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10205)
> at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:210)
> at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
> at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:425)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:309)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1153)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1206)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
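The quoted stack trace walks recursively from estimateRowSizeFromSchema through getSizeOfComplexTypes into getSizeOfMap, so every level of that recursion has to tolerate missing statistics. A minimal sketch of such a recursive estimator follows; the type model, names, and the one-entry-per-collection default are all hypothetical, not Hive's API.

```java
// Illustrative only: a recursive row-size estimator in the spirit of the
// walk shown in the stack trace. Every branch returns a default-based
// value instead of dereferencing statistics that may not exist.
import java.util.Arrays;
import java.util.List;

public class RowSizeSketch {
    interface HiveType {}

    static class Primitive implements HiveType {
        final long avgLen;
        Primitive(long avgLen) { this.avgLen = avgLen; }
    }

    static class ListType implements HiveType {
        final HiveType element;
        ListType(HiveType element) { this.element = element; }
    }

    static class MapType implements HiveType {
        final HiveType key;
        final HiveType value;
        MapType(HiveType key, HiveType value) { this.key = key; this.value = value; }
    }

    static class StructType implements HiveType {
        final List<HiveType> fields;
        StructType(HiveType... fields) { this.fields = Arrays.asList(fields); }
    }

    // With no statistics available (the CBO-disabled case above shows
    // "Num rows: 1 Data size: 0"), assume one entry per collection and
    // recurse into element, key/value, and field types.
    static long estimate(HiveType t) {
        if (t instanceof Primitive) {
            return ((Primitive) t).avgLen;
        }
        if (t instanceof ListType) {
            return estimate(((ListType) t).element);
        }
        if (t instanceof MapType) {
            MapType m = (MapType) t;
            return estimate(m.key) + estimate(m.value);
        }
        StructType s = (StructType) t;
        long total = 0;
        for (HiveType f : s.fields) {
            total += estimate(f);
        }
        return total;
    }

    public static void main(String[] args) {
        HiveType str = new Primitive(8);  // e.g. a string column, avg 8 bytes
        System.out.println(estimate(new MapType(str, str)));                   // prints 16
        System.out.println(estimate(new StructType(str, new ListType(str))));  // prints 16
    }
}
```

The point of the sketch is the shape of the recursion: each complex type dispatches to its children, so a null check (or a default such as this) is needed once at each level rather than only at the top.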



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

{code}

  was:
The demo like this:
{code:java}
create table explain_npe ( c1 map<string,string> );
explain select c1 from explain_npe where c1 is null;{code}
error like:
{code:java}
2019-10-10 09:11:52,670 ERROR ql.Driver (SessionState.java:printError(1068)) - 
FAILED: NullPointerException null
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfMap(StatsUtils.java:1045)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfComplexTypes(StatsUtils.java:931)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.getAvgColLenOfVariableLengthTypes(StatsUtils.java:869)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.estimateRowSizeFromSchema(StatsUtils.java:526)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:223)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:136)
at 
org.apache.hadoop.hive.ql.stats.StatsUtils.
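The NPE comes from StatsUtils.getSizeOfMap dereferencing a null map constant (the literal produced by the `c1 is null` predicate with CBO disabled) while estimating column widths. A minimal, self-contained sketch of the kind of null guard that avoids it; the class name `MapSizeEstimator` and the constant `DEFAULT_COMPLEX_TYPE_SIZE` are illustrative, not Hive's actual code:

```java
import java.util.Map;

public class MapSizeEstimator {
    // Fallback width to use when the constant is null (illustrative value)
    static final long DEFAULT_COMPLEX_TYPE_SIZE = 8L;

    // Estimates the serialized size of a map-typed value
    static long estimateMapSize(Map<String, String> value) {
        if (value == null) {
            // Without this guard, dereferencing the null constant throws the NPE
            return DEFAULT_COMPLEX_TYPE_SIZE;
        }
        long size = 0;
        for (Map.Entry<String, String> e : value.entrySet()) {
            size += e.getKey().length() + e.getValue().length();
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(estimateMapSize(null)); // prints 8
    }
}
```

The real fix would apply the same guard wherever getSizeOfMap / getSizeOfComplexTypes receive a possibly-null constant.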

[jira] [Assigned] (HIVE-22412) StatsUtils throw NPE when explain

2019-10-28 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-22412:
-


> StatsUtils throw NPE when explain
> -
>
> Key: HIVE-22412
> URL: https://issues.apache.org/jira/browse/HIVE-22412
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> The demo like this:
> {code:java}
> create table explain_npe ( c1 map<string,string> );
> explain select c1 from explain_npe where c1 is null;{code}
> error like:
> {code:java}
> 2019-10-10 09:11:52,670 ERROR ql.Driver (SessionState.java:printError(1068)) 
> - FAILED: NullPointerException null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfMap(StatsUtils.java:1045)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfComplexTypes(StatsUtils.java:931)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.getAvgColLenOfVariableLengthTypes(StatsUtils.java:869)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.estimateRowSizeFromSchema(StatsUtils.java:526)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:223)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:136)
> at 
> org.apache.hadoop.hive.ql.stats.StatsUtils.collectStatistics(StatsUtils.java:124)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.StatsRulesProcFactory$TableScanStatsRule.process(StatsRulesProcFactory.java:111)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:95)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:79)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:110)
> at 
> org.apache.hadoop.hive.ql.optimizer.stats.annotation.AnnotateWithStatistics.transform(AnnotateWithStatistics.java:78)
> at 
> org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10205)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:210)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
> at 
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:425)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:309)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1153)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1206)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1082)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-14650) Select fails when ORC file has more columns than table schema

2019-10-12 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-14650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-14650:
-

Assignee: (was: xiepengjie)

> Select fails when ORC file has more columns than table schema
> -
>
> Key: HIVE-14650
> URL: https://issues.apache.org/jira/browse/HIVE-14650
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Jeff Mink
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22247) HiveHFileOutputFormat throws FileNotFoundException when partition's task output empty

2019-09-26 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22247:
--
Description: 
When a partition's task output is empty, HiveHFileOutputFormat throws a 
FileNotFoundException like this:
{code:java}
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: 1 finished. closing... 
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 0
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
2019-09-24 19:15:55,915 INFO [main] 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
Committer Algorithm version is 1
2019-09-24 19:15:55,954 INFO [main] 
org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
2019-09-24 19:15:56,089 ERROR [main] ExecReducer: Hit error while closing 
operators - failing tree
2019-09-24 19:15:56,090 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.RuntimeException: Hive Runtime Error while 
closing operators: java.io.FileNotFoundException: File 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
 does not exist.
  at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:287)
  at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:453)
  at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.io.FileNotFoundException: File 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
 does not exist.
  at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:200)
  at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1016)
  at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
  at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
  at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:278)
  ... 7 more
Caused by: java.io.FileNotFoundException: File 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
 does not exist.
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:880)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:109)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:938)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:934)
  at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:945)
  at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1592)
  at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1632)
  at 
org.apache.hadoop.hive.hbase.HiveHFileOutputFormat$1.close(HiveHFileOutputFormat.java:153)
  at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:197)
  ... 11 more

2019-09-24 19:15:56,093 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
cleanup for the task
{code}
I think we should skip it if srcDir does not exist; a fix could look like this:
{code:java}
@Override
public void close(boolean abort) throws IOException {
  try {

...

FileStatus [] files = null;
for (;;) {
  try {
    files = fs.listStatus(srcDir, FileUtils.STAGING_DIR_PATH_FILTER);
    break;
  } catch (FileNotFoundException fnfe) {
LOG.error(String.format("Output data is empty, please check Task [ %s 
]", tac.getTaskAttemptID().toString()), fnfe);
break;
  }
}
if (files != null ) {
  for (Fil

[jira] [Updated] (HIVE-22247) HiveHFileOutputFormat throws FileNotFoundException when partition's task output empty

2019-09-26 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22247:
--
Component/s: HBase Handler

> HiveHFileOutputFormat throws FileNotFoundException when partition's task 
> output empty
> -
>
> Key: HIVE-22247
> URL: https://issues.apache.org/jira/browse/HIVE-22247
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 2.2.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> When partition's task output empty, HiveHFileOutputFormat throws 
> FileNotFoundException like this:
> {code:java}
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 1 finished. closing... 
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
> 2019-09-24 19:15:55,915 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2019-09-24 19:15:55,954 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2019-09-24 19:15:56,089 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2019-09-24 19:15:56,090 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:287)
>   at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:453)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:200)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1016)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:278)
>   ... 7 more
> Caused by: java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:880)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:109)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:938)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:934)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:945)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1592)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1632)
>   at 
> org.apache.hadoop.hive.hbase.HiveHFileOutputFormat$1.close(HiveHFileOutputFormat.java:153)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkO

[jira] [Updated] (HIVE-22247) HiveHFileOutputFormat throws FileNotFoundException when partition's task output empty

2019-09-26 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22247:
--
Description: 
When a partition's task output is empty, HiveHFileOutputFormat throws a 
FileNotFoundException like this:
{code:java}
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: 1 finished. closing... 
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 0
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
2019-09-24 19:15:55,886 INFO [main] 
org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
2019-09-24 19:15:55,915 INFO [main] 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
Committer Algorithm version is 1
2019-09-24 19:15:55,954 INFO [main] 
org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
deprecated. Instead, use io.native.lib.available
2019-09-24 19:15:56,089 ERROR [main] ExecReducer: Hit error while closing 
operators - failing tree
2019-09-24 19:15:56,090 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.RuntimeException: Hive Runtime Error while 
closing operators: java.io.FileNotFoundException: File 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
 does not exist.
  at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:287)
  at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:453)
  at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.io.FileNotFoundException: File 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
 does not exist.
  at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:200)
  at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1016)
  at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
  at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
  at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:278)
  ... 7 more
Caused by: java.io.FileNotFoundException: File 
hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
 does not exist.
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:880)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:109)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:938)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:934)
  at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:945)
  at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1592)
  at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1632)
  at 
org.apache.hadoop.hive.hbase.HiveHFileOutputFormat$1.close(HiveHFileOutputFormat.java:153)
  at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:197)
  ... 11 more

2019-09-24 19:15:56,093 INFO [main] org.apache.hadoop.mapred.Task: Runnning 
cleanup for the task
{code}
I think we should skip it if srcDir does not exist; a fix could look like this:
{code:java}
@Override
public void close(boolean abort) throws IOException {
  try {

...

FileStatus [] files = null;
for (;;) {
  try {
    files = fs.listStatus(srcDir, FileUtils.STAGING_DIR_PATH_FILTER);
    break;
  } catch (FileNotFoundException fnfe) {
LOG.error(String.format("Output data is empty, please check Task [ %s 
]", tac.getTaskAttemptID().toString()), fnfe);
break;
  }

   ...

  } catch (InterruptedException ex) {
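Modeled outside Hadoop for brevity, the proposed behavior — treat a missing staging directory as empty, skippable task output instead of failing the reducer — can be sketched as below. The helper name `listOrNull` and the use of `java.nio` in place of Hadoop's `FileSystem` API are assumptions for the sake of a runnable example:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SkipEmptyOutput {
    // Returns the directory's entries, or null when the task wrote nothing and
    // its staging directory was never created (the FileNotFoundException case).
    static List<Path> listOrNull(Path srcDir) {
        try (Stream<Path> s = Files.list(srcDir)) {
            return s.collect(Collectors.toList());
        } catch (NoSuchFileException fnfe) {
            // Empty task output: log-and-skip rather than fail the whole task
            return null;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // A missing directory is reported as null, not as an exception
        System.out.println(listOrNull(Paths.get("definitely-missing-dir")) == null);
    }
}
```

The caller then guards with `if (files != null)` exactly as in the patch above.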

[jira] [Assigned] (HIVE-22247) HiveHFileOutputFormat throws FileNotFoundException when partition's task output empty

2019-09-26 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-22247:
-


> HiveHFileOutputFormat throws FileNotFoundException when partition's task 
> output empty
> -
>
> Key: HIVE-22247
> URL: https://issues.apache.org/jira/browse/HIVE-22247
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> When partition's task output empty, HiveHFileOutputFormat throws 
> FileNotFoundException like this:
> {code:java}
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: 1 finished. closing... 
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: FS[1]: records written - 0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
> 2019-09-24 19:15:55,886 INFO [main] 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_tmp.-ext-10002/02_0
> 2019-09-24 19:15:55,915 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2019-09-24 19:15:55,954 INFO [main] 
> org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 2019-09-24 19:15:56,089 ERROR [main] ExecReducer: Hit error while closing 
> operators - failing tree
> 2019-09-24 19:15:56,090 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.RuntimeException: Hive Runtime Error 
> while closing operators: java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:287)
>   at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:453)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:200)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1016)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:617)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:631)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.close(ExecReducer.java:278)
>   ... 7 more
> Caused by: java.io.FileNotFoundException: File 
> hdfs://Hdptest-mini-nmg/tmp/hive-staging/hadoop_hive_2019-09-24_19-15-26_453_1697529445006435790-5/_task_tmp.-ext-10002/_tmp.02_0
>  does not exist.
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:880)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:109)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:938)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:934)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:945)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1592)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1632)
>   at 
> org.apache.hadoop.hive.hbase.HiveHFileOutputFormat$1.close(HiveHFileOutputFormat.java:153)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:197)
>   ... 11 more
> 2019-09-24 19:15:56,093 I

[jira] [Assigned] (HIVE-14650) Select fails when ORC file has more columns than table schema

2019-08-22 Thread xiepengjie (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-14650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-14650:
-

Assignee: xiepengjie

> Select fails when ORC file has more columns than table schema
> -
>
> Key: HIVE-14650
> URL: https://issues.apache.org/jira/browse/HIVE-14650
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
>Reporter: Jeff Mink
>Assignee: xiepengjie
>Priority: Minor
>
> When SELECTing from a Hive ORC table, the following IndexOutOfBoundsException 
> is thrown if the underlying ORC file has 4 or more columns more than the Hive 
> schema (where N is the number of columns in the ORC file).
> {noformat}
> Failed with exception 
> java.io.IOException:java.lang.IndexOutOfBoundsException: toIndex = N
> 16/08/25 15:22:19 ERROR CliDriver: Failed with exception 
> java.io.IOException:java.lang.IndexOutOfBoundsException: toIndex = N
> java.io.IOException: java.lang.IndexOutOfBoundsException: toIndex = N
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:507)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:414)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:140)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1686)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.lang.IndexOutOfBoundsException: toIndex = 6
> at java.util.ArrayList.subListRangeCheck(ArrayList.java:1004)
> at java.util.ArrayList.subList(ArrayList.java:996)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.getSchemaOnRead(RecordReaderFactory.java:161)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderFactory.createTreeReader(RecordReaderFactory.java:66)
> at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:202)
> at 
> org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:541)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:183)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$OriginalReaderPair.<init>(OrcRawRecordMerger.java:226)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:437)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1216)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1113)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:673)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:323)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:445)
> ... 15 more
> {noformat}
> This error appears to be related to the patch of HIVE-10591.
> Steps to reproduce (Hive QL):
> {noformat}
> DROP TABLE IF EXISTS orc_drop_column;
> CREATE TABLE orc_drop_column (`id` int, `name` string, `description` string, 
> `somevalue` double, `someflag` boolean, `somedate` timestamp) STORED AS ORC;
> INSERT INTO TABLE orc_drop_column select * from (select 1, 'my_name', 
> 'my_desc', 5.5, true, '2016-08-25 06:00:00') a;
> ALTER TABLE orc_drop_column SET SERDE 
> 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe';
> ALTER TABLE orc_drop_column REPLACE COLUMNS (
>   `id` int,
>   `name` string
> );
> ALTER TABLE orc_drop_column SET SERDE 
> 'org.apache.hadoop.hive.ql.io.orc.OrcSerde';
> SELECT id, name FROM orc_drop_column;
> {noformat}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (HIVE-21719) Use Random.nextDouble instead of Math.random

2019-08-08 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902799#comment-16902799
 ] 

xiepengjie commented on HIVE-21719:
---

What problems does this patch solve?

> Use Random.nextDouble instead of Math.random
> 
>
> Key: HIVE-21719
> URL: https://issues.apache.org/jira/browse/HIVE-21719
> Project: Hive
>  Issue Type: Improvement
>Reporter: bd2019us
>Priority: Trivial
> Attachments: HIVE-21719.patch
>
>
> Performance overhead from Math.random can be reduced by using 
> Random.nextDouble instead.
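The suggestion above can be sketched as follows; a reused `Random` instance replaces repeated `Math.random()` calls (a minimal illustration of the idea, not the actual patch):

```java
import java.util.Random;

public class RandomDemo {
    public static void main(String[] args) {
        // Math.random() delegates to one shared Random instance behind a
        // static accessor; reusing your own instance avoids that indirection.
        Random rnd = new Random();
        double d = rnd.nextDouble();
        // nextDouble() has the same contract as Math.random(): [0.0, 1.0).
        if (d < 0.0 || d >= 1.0) {
            throw new AssertionError("value out of range: " + d);
        }
    }
}
```

On modern JDKs, `ThreadLocalRandom.current().nextDouble()` is another common replacement when calls come from multiple threads.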



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-14298) NPE could be thrown in HMS when an ExpressionTree could not be made from a filter

2019-08-07 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901947#comment-16901947
 ] 

xiepengjie commented on HIVE-14298:
---

Hi, [~ctang], 

About your test case:
{code:java}
It is quite easy to reproduce the NPE issue with following steps:
set the HMS configurations:
hive.metastore.try.direct.sql to true
hive.metastore.limit.partition.request to a certain positive integer (-1 means 
disabled which is default).
Run query like 
select * from sample_pt where code in ('53-5022', '53-5023') and dummy like 
'%1';
you will get "FAILED: SemanticException java.lang.NullPointerException"
{code}
How should the table sample_pt be created so that this can be reproduced?
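For reference, a partitioned table consistent with that query might look like the sketch below (the real sample_pt schema is an assumption; only a `code` partition column and a `dummy` regular column are implied by the filter):

```sql
DROP TABLE IF EXISTS sample_pt;
-- 'dummy' is a regular column and 'code' a partition column, so the filter
-- "code in (...) and dummy like '%1'" mixes partition and non-partition
-- predicates, which exercises the metastore filter-parsing path.
CREATE TABLE sample_pt (dummy string) PARTITIONED BY (code string);
ALTER TABLE sample_pt ADD PARTITION (code='53-5022');
ALTER TABLE sample_pt ADD PARTITION (code='53-5023');
```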

> NPE could be thrown in HMS when an ExpressionTree could not be made from a 
> filter
> -
>
> Key: HIVE-14298
> URL: https://issues.apache.org/jira/browse/HIVE-14298
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
>Priority: Major
> Fix For: 2.1.1, 2.2.0
>
> Attachments: HIVE-14298.patch, HIVE-14298.patch, HIVE-14298.patch
>
>
> In many cases an ExpressionTree cannot be made from a filter (e.g. the 
> parser fails to parse the filter) and its value is null. This null is then 
> passed around and used by a couple of HMS methods, which can cause a 
> NullPointerException.
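The failure mode described above is the classic "parse returns null, caller dereferences" pattern; a hedged sketch of the guard it calls for (class and method names here are illustrative stand-ins, not the real HMS code):

```java
public class FilterDemo {
    // Stand-in for the HMS filter parser: returns null on unparseable input.
    static Object makeExpressionTree(String filter) {
        return (filter != null && !filter.contains("like")) ? new Object() : null;
    }

    // Guarded caller: surfaces a clear error instead of an NPE downstream.
    static Object requireTree(String filter) {
        Object tree = makeExpressionTree(filter);
        if (tree == null) {
            throw new IllegalArgumentException("Error parsing partition filter: " + filter);
        }
        return tree;
    }

    public static void main(String[] args) {
        requireTree("code = '53-5022'");    // parses in this stand-in
        try {
            requireTree("dummy like '%1'"); // stand-in parser rejects this
        } catch (IllegalArgumentException e) {
            System.out.println("rejected cleanly");
        }
    }
}
```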



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-08-06 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901655#comment-16901655
 ] 

xiepengjie commented on HIVE-22040:
---

Thanks [~jdere] for reviewing the patch.

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. When I try to drop 
> a partition whose parent path no longer exists, an exception is thrown with 
> 'Failed to delete parent: File does not exist'. The partition's metadata in 
> MySQL has already been deleted, but the exception is still thrown, so the call 
> fails when connecting to HiveServer2 via JDBC from Java. This problem also 
> exists on the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add a partition:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> – Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-08-05 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.03.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-08-05 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: (was: HIVE-22040.03.patch)

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-08-05 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.03.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.03.patch, HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895188#comment-16895188
 ] 

xiepengjie commented on HIVE-22040:
---

Hi [~jdere], could you please help me review this "does not exist in index" 
error?

I checked out a branch from the remote branch named branch-3, and the test ran 
successfully in my local environment with the command "mvn test 
-Dtest=TestCliDriver -Dqfile=drop_deleted_partitions.q 
-Dtest.output.overwrite=true".


> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.02.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: (was: HIVE-22040.02.patch)

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Component/s: (was: Metastore)
 Standalone Metastore

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: (was: HIVE-22040.02.patch)

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.02.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Affects Version/s: (was: 2.0.0)
   (was: 1.2.1)

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895015#comment-16895015
 ] 

xiepengjie edited comment on HIVE-22040 at 7/29/19 8:26 AM:


Hi, [~jdere],

Thanks for your reply.

HIVE-17472 fixed the bug of dropping a partition whose data path does not 
exist, but that case may differ from mine: my issue is triggered when dropping 
a partition whose data's parent path does not exist. This exception is probably 
thrown because org.apache.hadoop.hive.metastore.Warehouse#isEmpty does not 
catch the FileNotFoundException.
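If the fix is to catch that exception inside Warehouse#isEmpty, the intended behavior can be sketched as follows. This is a minimal stand-in using java.nio.file rather than Hadoop's FileSystem API, and `isEmpty` here is a hypothetical simplification of the metastore method, not the actual patch: a path that no longer exists (including one whose parent is gone) is simply reported as empty.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class IsEmptySketch {
    // Sketch of the proposed behavior: a missing path (or a path whose
    // parent is missing) is treated as "empty" instead of letting the
    // lookup exception escape to the client.
    static boolean isEmpty(Path dir) throws IOException {
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            return !ds.iterator().hasNext();
        } catch (NoSuchFileException e) {
            // Analogous to catching FileNotFoundException in
            // Warehouse#isEmpty: nothing is there, so it is empty.
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("part");
        Path month = Files.createDirectories(tmp.resolve("year=2019/month=07"));
        Path day = Files.createDirectories(month.resolve("day=01"));
        System.out.println(isEmpty(day));   // true: exists but has no children
        // Simulate 'hadoop fs -rm -r .../month=07' before dropping day=01.
        Files.delete(day);
        Files.delete(month);
        System.out.println(isEmpty(day));   // true: the parent is gone too
    }
}
```

With this behavior, drop partition would proceed past the emptiness check instead of surfacing the FileNotFoundException to the JDBC client.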

 

 


was (Author: xiepengjie):
Hi, [~jdere],

Thanks for your reply.

HIVE-17472 fixed the bug of dropping a partition whose data path does not 
exist, but that case may differ from mine: my issue is triggered when dropping 
a partition whose data's parent path does not exist. This exception is probably 
thrown because org.apache.hadoop.hive.metastore.Warehouse#isEmpty does not 
catch the FileNotFoundException.

 

 

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895015#comment-16895015
 ] 

xiepengjie edited comment on HIVE-22040 at 7/29/19 7:44 AM:


Hi, [~jdere],

Thanks for your reply.

HIVE-17472 fixed the bug of dropping a partition whose data path does not 
exist, but that case may differ from mine: my issue is triggered when dropping 
a partition whose data's parent path does not exist. This exception is probably 
thrown because org.apache.hadoop.hive.metastore.Warehouse#isEmpty does not 
catch the FileNotFoundException.

 

 


was (Author: xiepengjie):
Hi, Jason Dere,

Thanks for your reply.

HIVE-17472 fixed the bug of dropping a partition whose data path does not 
exist, but that case may differ from mine: my issue is triggered when dropping 
a partition whose data's parent path does not exist. This exception is probably 
thrown because org.apache.hadoop.hive.metastore.Warehouse#isEmpty does not 
catch the FileNotFoundException.

 

 

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-29 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895015#comment-16895015
 ] 

xiepengjie commented on HIVE-22040:
---

Hi, Jason Dere,

Thanks for your reply.

HIVE-17472 fixed the bug of dropping a partition whose data path does not 
exist, but that case may differ from mine: my issue is triggered when dropping 
a partition whose data's parent path does not exist. This exception is probably 
thrown because org.apache.hadoop.hive.metastore.Warehouse#isEmpty does not 
catch the FileNotFoundException.
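For context on where the stack trace comes from: asking the filesystem about a path whose parent directory has been removed throws, and that exception currently escapes Warehouse#isEmpty. A minimal local illustration, with java.nio.file's NoSuchFileException standing in for HDFS's FileNotFoundException and made-up partition paths:

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class MissingParentDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempDirectory("demo");
        // 'month=07' was removed out of band, so 'day=01' and its parent
        // are both gone -- mirroring the 'hadoop fs -rm -r' step above.
        Path part = tmp.resolve("year=2019/month=07/day=01");
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(part)) {
            System.out.println("partition dir listed");
        } catch (NoSuchFileException e) {
            // This is the condition an emptiness check has to tolerate
            // instead of propagating to the client.
            System.out.println("missing: " + e.getFile());
        }
    }
}
```

The uncaught analogue of this exception is what ends up wrapped in the 'Execution Error, return code 1' message seen by the JDBC client.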

 

 

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-28 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.02.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.02.patch, 
> HIVE-22040.patch
>
>
> I created a managed table with multiple partition columns. Dropping a 
> partition throws 'Failed to delete parent: File does not exist' when the 
> partition's parent path does not exist. The partition's metadata in MySQL 
> has been deleted, but the exception is still thrown, so the call fails when 
> connecting to HiveServer2 via JDBC from Java. This problem also exists on 
> the master branch; I think it is very unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the metastore throws an exception 
> with 'Failed to delete parent: File does not exist'.
> {code:java}
> alter table t1 drop partition(year='2019', month='07', day='01');
> {code}
> The exception looks like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-28 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Description: 
I created a managed table with multiple partition columns. Dropping a partition 
throws 'Failed to delete parent: File does not exist' when the partition's 
parent path does not exist. The partition's metadata in MySQL has been deleted, 
but the exception is still thrown, so the call fails when connecting to 
HiveServer2 via JDBC from Java. This problem also exists on the master branch; 
I think it is very unfriendly and we should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the metastore throws an exception 
with 'Failed to delete parent: File does not exist'.
{code:java}
alter table t1 drop partition(year='2019', month='07', day='01');
{code}
The exception looks like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
(state=08S01,code=1)
 {code}

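The fix being requested amounts to making the drop idempotent on the filesystem side: when the partition's metadata is already gone, a missing parent path should be treated as "nothing left to delete" rather than an error. A minimal sketch of that idea on a local filesystem (`drop_partition_path` is a hypothetical helper for illustration, not Hive's actual metastore code):

```python
import os
import shutil
import tempfile

def drop_partition_path(part_path):
    """Delete a partition directory, treating an already-missing path as success.

    Sketch of the fix idea: the partition's metadata has already been removed,
    so a path deleted out of band should not raise an exception.
    """
    if not os.path.exists(part_path):
        # Parent (or the path itself) was deleted externally, e.g. by
        # `hadoop fs -rm -r .../year=2019/month=07`; nothing left to do.
        return False
    shutil.rmtree(part_path)
    return True

# Reproduce the scenario from the report with local directories:
base = tempfile.mkdtemp()
part = os.path.join(base, "year=2019", "month=07", "day=01")
os.makedirs(part)

drop_partition_path(part)                  # normal drop: the path existed
removed_again = drop_partition_path(part)  # path already gone: no error raised
print(removed_again)
```

The same check-before-delete pattern applies to the metastore's warehouse cleanup: a `False` return (path absent) is logged and ignored instead of propagating as a DDLTask failure.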

[jira] [Commented] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-26 Thread xiepengjie (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16893649#comment-16893649
 ] 

xiepengjie commented on HIVE-22040:
---

Hi [~jdere], could you help me review this patch?

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.patch
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-26 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.01.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.01.patch, HIVE-22040.patch
>





[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-25 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Status: Patch Available  (was: In Progress)

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 3.0.0, 2.0.0, 1.2.1
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.patch
>





[jira] [Work started] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-25 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22040 started by xiepengjie.
-
> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.patch
>





[jira] [Work stopped] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-25 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22040 stopped by xiepengjie.
-
> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.patch
>





[jira] [Work started] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-25 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-22040 started by xiepengjie.
-
> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.patch
>





[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-25 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Affects Version/s: 2.0.0
   3.0.0

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1, 2.0.0, 3.0.0
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.patch
>





[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-25 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Attachment: HIVE-22040.patch

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
> Attachments: HIVE-22040.patch
>





[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Description: 
I created a managed table with multiple partition columns. When I try to drop a 
partition whose parent path no longer exists, the operation throws 'Failed to 
delete parent: File does not exist'. The partition's metadata in MySQL has 
already been deleted, but the exception is still thrown, so the statement fails 
when connecting to HiveServer2 over JDBC from Java. This problem also exists on 
the master branch; I think it is very unfriendly and we should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
(state=08S01,code=1)
 {code}
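The behaviour asked for here (treating a partition directory that is already 
gone as a successful removal rather than an error) can be sketched outside 
Hive. The helper below is a hypothetical analogue built on java.nio.file so 
the sketch stays self-contained; the real fix would live in the Metastore 
against Hadoop's FileSystem API, where inspecting the missing parent directory 
is what currently raises the error:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NotDirectoryException;
import java.nio.file.Path;

public class TolerantDelete {

    // Hypothetical sketch: delete a partition directory, treating an
    // already-missing path (e.g. because its parent was removed with
    // `hadoop fs -rm -r`) as a no-op instead of an error. Returns true
    // if something was actually deleted.
    public static boolean deleteIfPresent(Path dir) throws IOException {
        if (Files.notExists(dir)) {
            return false; // path (or its parent) already gone: nothing to do
        }
        try (DirectoryStream<Path> children = Files.newDirectoryStream(dir)) {
            for (Path child : children) {
                deleteIfPresent(child); // recurse into subdirectories/files
            }
        } catch (NotDirectoryException e) {
            // plain file: fall through and delete it below
        }
        Files.delete(dir); // directory is empty (or a plain file) by now
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path warehouse = Files.createTempDirectory("drop_partition_demo");
        Path partition = warehouse.resolve("year=2019")
                                  .resolve("month=07")
                                  .resolve("day=01");
        Files.createDirectories(partition);

        System.out.println(deleteIfPresent(partition)); // true: it existed
        // Simulate the scenario above: the parent 'month=07' is gone too.
        deleteIfPresent(warehouse.resolve("year=2019").resolve("month=07"));
        System.out.println(deleteIfPresent(partition)); // false: quiet no-op
    }
}
```

Run against a temp directory, the first call returns true and the second 
quietly returns false instead of throwing, which is the contract proposed here 
for dropping a partition whose directory (or parent directory) is already gone.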

  was:
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will fail if  
connecting hiveserver2 with jdbc by java, I  think it is very unfriendly and we 
should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 

[jira] [Assigned] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-22040:
-

Assignee: xiepengjie

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> I create a managed table with multiple partition columns, when I try to drop 
> partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist. The partition's 
> metadata in mysql has been deleted, but the exception is still thrown. it 
> will fail if  connecting hiveserver2 with jdbc by java, I  think it is very 
> unfriendly and we should fix it.
> Example:
> – First, create a managed table with multiple partition columns, and add partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> – Second, delete the path of partition 'month=07':
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
> -- Third, when I try to drop the partition, the Metastore throws an exception 
> with 'Failed to delete parent: File does not exist':
> {code:java}
> alter table t1 drop partition (year='2019', month='07', day='01');
> {code}
> exception like this:
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}





[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Description: 
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will fail if  
connecting hiveserver2 with jdbc by java, I  think it is very unfriendly and we 
should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
(state=08S01,code=1)
 {code}

  was:
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will failed if  
connect hiveserver2 with jdbc by java, I  think it is very unfriendly and we 
should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Serv

[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Description: 
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will failed if  
connect hiveserver2 with jdbc by java, I  think it is very unfriendly and we 
should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
(state=08S01,code=1)
 {code}

  was:
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will failed if 
using jdbc by java to connec hiveserver2, I  think it is very unfriendly and we 
should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Ser

[jira] [Updated] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie updated HIVE-22040:
--
Description: 
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will failed if 
using jdbc by java to connec hiveserver2, I  think it is very unfriendly and we 
should fix it.

Example:

– First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
– Second, delete the path of partition 'month=07':
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
(state=08S01,code=1)
 {code}

  was:
I create a managed table with multiple partition columns, when I try to drop 
partition throws exception with 'Failed to delete parent: File does not exist' 
when the partition's parent path does not exist. The partition's metadata in 
mysql has been deleted, but the exception is still thrown. it will failed if 
using jdbc by java to connec hiveserver2, I  think it is very unfriendly and we 
should fix it.

Example:

-- First, create a managed table with multiple partition columns, and add partitions:
{code:java}
drop table if exists t1;

create table t1 (c1 int) partitioned by (year string, month string, day string);

alter table t1 add partition(year='2019', month='07', day='01');{code}
-- Second, delete the path of partition 'month=07':

 
{code:java}
hadoop fs -rm -r 
/user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
 

-- Third, when I try to drop the partition, the Metastore throws an exception 
with 'Failed to delete parent: File does not exist':

 
{code:java}
alter table t1 drop partition (year='2019', month='07', day='01');
{code}
exception like this:

 

 
{code:java}
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File does 
not exist: /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.Pr

[jira] [Assigned] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-22040:
-

Assignee: (was: xiepengjie)

> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1
>Reporter: xiepengjie
>Priority: Major
>
> I create a managed table with multiple partition columns, when I try to drop 
> partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist. The partition's 
> metadata in mysql has been deleted, but the exception is still thrown. it 
> will failed if using jdbc by java to connec hiveserver2, I  think it is very 
> unfriendly and we should fix it.
> Example:
> -- First, create a managed table with multiple partition columns, and add 
> partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> -- Second, delete the path of partition 'month=07':
>  
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
>  
> -- Third, when I try to drop the partition, the Metastore throws an exception 
> with 'Failed to delete parent: File does not exist':
>  
> {code:java}
> alter table t1 drop partition (year='2019', month='07', day='01');
> {code}
> exception like this:
>  
>  
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}





[jira] [Assigned] (HIVE-22040) Drop partition throws exception with 'Failed to delete parent: File does not exist' when the partition's parent path does not exists

2019-07-24 Thread xiepengjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-22040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiepengjie reassigned HIVE-22040:
-


> Drop partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exists
> 
>
> Key: HIVE-22040
> URL: https://issues.apache.org/jira/browse/HIVE-22040
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 1.2.1
>Reporter: xiepengjie
>Assignee: xiepengjie
>Priority: Major
>
> I create a managed table with multiple partition columns, when I try to drop 
> partition throws exception with 'Failed to delete parent: File does not 
> exist' when the partition's parent path does not exist. The partition's 
> metadata in mysql has been deleted, but the exception is still thrown. it 
> will failed if using jdbc by java to connec hiveserver2, I  think it is very 
> unfriendly and we should fix it.
> Example:
> -- First, create a managed table with multiple partition columns, and add 
> partitions:
> {code:java}
> drop table if exists t1;
> create table t1 (c1 int) partitioned by (year string, month string, day 
> string);
> alter table t1 add partition(year='2019', month='07', day='01');{code}
> -- Second, delete the path of partition 'month=07':
>  
> {code:java}
> hadoop fs -rm -r 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07{code}
>  
> -- Third, when I try to drop the partition, the Metastore throws an exception 
> with 'Failed to delete parent: File does not exist':
>  
> {code:java}
> alter table t1 drop partition (year='2019', month='07', day='01');
> {code}
> exception like this:
>  
>  
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Failed to delete parent: File 
> does not exist: 
> /user/hadoop/xiepengjietest.db/drop_partition/year=2019/month=07
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummaryInt(FSDirStatAndListingOp.java:493)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getContentSummary(FSDirStatAndListingOp.java:140)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3995)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1202)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:883)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2115)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2111)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2111) 
> (state=08S01,code=1)
>  {code}


