[ 
https://issues.apache.org/jira/browse/HIVE-25912?focusedWorklogId=723471&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-723471
 ]

ASF GitHub Bot logged work on HIVE-25912:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Feb/22 07:08
            Start Date: 09/Feb/22 07:08
    Worklog Time Spent: 10m 
      Work Description: baifachuan commented on pull request #2987:
URL: https://github.com/apache/hive/pull/2987#issuecomment-1033417977


   > Maybe an overkill, but shall we create a test which makes sure that the 
method is not removed from the create table path?
   > 
   > Maybe something like this: 
https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/client/TestTablesCreateDropAlterTruncate.java#L341
   
   Thanks for your reply. I think this feature only adds a validation of the 
storage location, so it stays compatible with the other tests.
   
   I made sure that create fails when the location is the ROOT path, and that 
the other tests still pass.
   
   So I added two tests: one for the ROOT path and one for a non-ROOT path. 
   
   The tests in TestTablesCreateDropAlterTruncate.java are more like E2E tests.
   
   I will consider adding a test for this case.
   
   thx.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 723471)
    Remaining Estimate: 81h 20m  (was: 81.5h)
            Time Spent: 14h 40m  (was: 14.5h)

> Drop external table at root of s3 bucket throws NPE
> ---------------------------------------------------
>
>                 Key: HIVE-25912
>                 URL: https://issues.apache.org/jira/browse/HIVE-25912
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 3.1.2
>         Environment: Hive version: 3.1.2
>            Reporter: Fachuan Bai
>            Assignee: Fachuan Bai
>            Priority: Major
>              Labels: metastore, pull-request-available
>         Attachments: hive bugs.png, hive-bug-01.png
>
>   Original Estimate: 96h
>          Time Spent: 14h 40m
>  Remaining Estimate: 81h 20m
>
> ENV:
> Hive 3.1.2
> HDFS:3.3.1
> OpenLDAP and Ranger enabled.
>  
> I create the external hive table using this command:
>  
> {code:java}
> CREATE EXTERNAL TABLE `fcbai`(
> `inv_item_sk` int,
> `inv_warehouse_sk` int,
> `inv_quantity_on_hand` int)
> PARTITIONED BY (
> `inv_date_sk` int) STORED AS ORC
> LOCATION
> 'hdfs://emr-master-1:8020/';
> {code}
>  
> The table was created successfully, but when I drop the table it throws an NPE:
>  
> {code:java}
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
> MetaException(message:java.lang.NullPointerException) 
> (state=08S01,code=1){code}
>  
> The same bug can be reproduced on other object storage file systems, such 
> as S3 or TOS:
> {code:java}
> CREATE EXTERNAL TABLE `fcbai`(
> `inv_item_sk` int,
> `inv_warehouse_sk` int,
> `inv_quantity_on_hand` int)
> PARTITIONED BY (
> `inv_date_sk` int) STORED AS ORC
> LOCATION
> 's3a://bucketname/'; // 'tos://bucketname/'{code}
>  
> Reading the source code, I found the cause in 
> common/src/java/org/apache/hadoop/hive/common/FileUtils.java:
> {code:java}
> // check if sticky bit is set on the parent dir
> FileStatus parStatus = fs.getFileStatus(path.getParent());
> if (!shims.hasStickyBit(parStatus.getPermission())) {
>   // no sticky bit, so write permission on parent dir is sufficient
>   // no further checks needed
>   return;
> }{code}
>  
> Because I set the table location to the HDFS root path 
> (hdfs://emr-master-1:8020/), the path.getParent() function returns null, which 
> causes the NPE.
> I think there are four possible solutions to fix the bug:
>  # modify the create table function: if the location is the root dir, fail 
> the create.
>  # modify the FileUtils.checkDeletePermission function: check 
> path.getParent(), and if it is null, return so the drop succeeds.
>  # modify the RangerHiveAuthorizer.checkPrivileges function of the Hive 
> Ranger plugin (in the Ranger repo): if the location is the root dir, fail the 
> create.
>  # modify the HDFS Path object so that path.getParent() does not return null 
> when the URI is the root dir.
> I recommend the first or second method; any suggestions for me? thx.
>  
>  
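The second option above (a null-parent guard in the delete-permission check) can be sketched as follows. This is a minimal illustration only: it uses java.nio.file.Path instead of Hadoop's org.apache.hadoop.fs.Path (both return null from getParent() for the root path), and deleteAllowed is a hypothetical stand-in for FileUtils.checkDeletePermission, not the real Hive method.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RootParentGuard {

    // Simplified stand-in for FileUtils.checkDeletePermission: if the path
    // has no parent (i.e. it is the filesystem root), there is no parent
    // directory whose sticky bit could block the delete, so return early
    // instead of dereferencing a null parent and throwing an NPE.
    static boolean deleteAllowed(Path path) {
        Path parent = path.getParent();
        if (parent == null) {
            // Root path: skip the sticky-bit check entirely.
            return true;
        }
        // The real code would call fs.getFileStatus(parent) here and
        // inspect the sticky bit; assume allowed for this sketch.
        return true;
    }

    public static void main(String[] args) {
        // getParent() of the root is null -- the condition that triggered
        // the NPE in the original code path.
        System.out.println(Paths.get("/").getParent());       // null
        System.out.println(deleteAllowed(Paths.get("/")));    // true
        System.out.println(deleteAllowed(Paths.get("/a/b"))); // true
    }
}
```

The same early-return shape would apply inside the real checkDeletePermission, placed before the fs.getFileStatus(path.getParent()) call shown in the quoted snippet.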



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
