[ https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16862544#comment-16862544 ]

Eric Yang commented on HDDS-1609:
---------------------------------

I basically ran test.sh in compose/ozone as a uid other than 1000.  The test 
output is generated as the test user, which is uid 1000 on my host.  Here is 
the output:

{code}
[eyang@localhost ozone]$ id eyang
uid=501(eyang) gid=1001(eyang) groups=1001(eyang),20(games),1000(docker)
[eyang@localhost ozone]$ pwd
/home/eyang/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone
[eyang@localhost ozone]$ ./test.sh 
Removing network ozone_default
WARNING: Network ozone_default not found.
Creating network "ozone_default" with the default driver
Creating ozone_om_1       ... done
Creating ozone_scm_1      ... done
Creating ozone_datanode_1 ... done
Creating ozone_datanode_2 ... done
Creating ozone_datanode_3 ... done
0 datanode is up and healthy (until now)
0 datanode is up and healthy (until now)
3 datanodes are up and registered to the scm
==============================================================================
auditparser                                                                   
==============================================================================
auditparser.Auditparser :: Smoketest ozone cluster startup                    
==============================================================================
Initiating freon to generate data                                     | FAIL |
255 != 0
------------------------------------------------------------------------------
Testing audit parser                                                  | FAIL |
255 != 0
------------------------------------------------------------------------------
auditparser.Auditparser :: Smoketest ozone cluster startup            | FAIL |
2 critical tests, 0 passed, 2 failed
2 tests total, 0 passed, 2 failed
==============================================================================
auditparser                                                           | FAIL |
2 critical tests, 0 passed, 2 failed
2 tests total, 0 passed, 2 failed
==============================================================================
Output:  /opt/hadoop/compose/ozone/result/robot-ozone-auditparser-om.xml
==============================================================================
basic :: Smoketest ozone cluster startup                                      
==============================================================================
Check webui static resources                                          | PASS |
------------------------------------------------------------------------------
Start freon testing                                                   | FAIL |
255 != 0
------------------------------------------------------------------------------
basic :: Smoketest ozone cluster startup                              | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
==============================================================================
Output:  /opt/hadoop/compose/ozone/result/robot-ozone-basic-scm.xml
Stopping ozone_datanode_3 ... done
Stopping ozone_datanode_2 ... done
Stopping ozone_datanode_1 ... done
Stopping ozone_scm_1      ... done
Stopping ozone_om_1       ... done
Removing ozone_datanode_3 ... done
Removing ozone_datanode_2 ... done
Removing ozone_datanode_1 ... done
Removing ozone_scm_1      ... done
Removing ozone_om_1       ... done
Removing network ozone_default
Log:     /opt/hadoop/compose/ozone/result/log.html
Report:  /opt/hadoop/compose/ozone/result/report.html
[eyang@localhost ozone]$ ls -l result/
total 2136
-rw-rw-r--. 1 eyang eyang 1268078 Jun 12 19:19 docker-ozone-basic-scm.log
-rw-r--r--. 1 test  users  239040 Jun 12 19:19 log.html
-rw-r--r--. 1 test  users  230011 Jun 12 19:19 report.html
-rw-r--r--. 1 test  users  220539 Jun 12 19:19 robot-ozone-auditparser-om.xml
-rw-r--r--. 1 test  users  220284 Jun 12 19:19 robot-ozone-basic-scm.xml
{code}

> Remove hard coded uid from Ozone docker image
> ---------------------------------------------
>
>                 Key: HDDS-1609
>                 URL: https://issues.apache.org/jira/browse/HDDS-1609
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Priority: Major
>             Fix For: 0.5.0
>
>
> The hadoop-runner image is hard coded to [USER 
> hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45]
>  and user hadoop is hard coded to uid 1000.  This arrangement complicates 
> development environments where the host user's uid is different from 1000.  
> Data written to external bind mount locations is owned by uid 1000, which can 
> prevent the development environment from cleaning up test data.  
> The Docker documentation states that "The best way to prevent 
> privilege-escalation attacks from within a container is to configure your 
> container’s applications to run as unprivileged users."  From an Ozone 
> architecture point of view, there is no reason for the Ozone daemons to 
> require a privileged or hard coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user to 
> reduce friction.  The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or in the docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless inside the docker container.  Some 
> commands may warn that the user does not have a name.  This can be resolved 
> by mounting /etc/passwd, or a file that looks like /etc/passwd, that contains 
> the host user entry.
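> As a rough sketch (the service name "om" and the mounted passwd file are 
> illustrative, not taken from the existing compose files), a docker-compose 
> override could look like:
> {code}
> # Hypothetical docker-compose fragment: run the container as the host user.
> # UID and GID must be present in the environment when docker-compose runs
> # (for example via an .env file), since compose does not set them itself.
> services:
>   om:
>     image: apache/hadoop-runner
>     user: "${UID}:${GID}"
>     volumes:
>       # read-only passwd file containing an entry for the host user, so
>       # tools that look up the user name do not complain
>       - ./passwd:/etc/passwd:ro
> {code}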
> h3. Solution 2
> Move the hard coded user to the uid range below 200.  The default Linux 
> profile reserves service uids below 200 and gives them a umask that keeps 
> data private to the service user, or group writable if the service shares a 
> group with other service users.  Register the service user with the Linux 
> vendors to ensure that there is a reserved uid for the Hadoop user, or pick 
> one that works for Hadoop.  This is a longer route to pursue, and may not be 
> fruitful.  
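> For illustration only, a Dockerfile change along these lines would pin the 
> user to a low uid (199 here is a placeholder, not a registered value):
> {code}
> # Sketch: create the hadoop service user with a fixed system uid below 200.
> # An officially reserved uid would have to be agreed with the Linux vendors.
> RUN groupadd -g 199 hadoop && \
>     useradd -u 199 -g hadoop -m -d /opt/hadoop hadoop
> USER hadoop
> {code}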
> h3. Solution 3
> Build the docker image with the sssd client installed by default.  This 
> allows the docker image to see host-level names by binding the sssd socket.  
> The instructions for doing this are on the [Hadoop 
> website|https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container].
> The prerequisite for this approach is that the host system has sssd 
> installed.  For production systems, there is a 99% chance that sssd is 
> installed.
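> Assuming the host runs sssd, a minimal sketch along the lines of that 
> document (the pipes path is the usual sssd socket location, and the image 
> name is illustrative) would be:
> {code}
> # Sketch only: share the host's sssd pipes with the container so name
> # lookups inside the container resolve against the host's sssd.  The image
> # must have the sssd-client libraries installed for this to work.
> docker run -d \
>   -v /var/lib/sss/pipes:/var/lib/sss/pipes:rw \
>   apache/hadoop-runner
> {code}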
> We may want to support a combination of solutions 1 and 3 as the proper fix.


