[ https://issues.apache.org/jira/browse/HDDS-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16865572#comment-16865572 ]
Elek, Marton commented on HDDS-1609:
------------------------------------

bq. Can you please share the content of your result folder after the test?

1. Can you please share the detailed results of the test execution (usually the ./results folder)? I am interested in the report.html and log.html.

2. Regarding "but it doesn't address the privilege escalation problem": can you please share how I can reproduce the "privilege escalation" problem (steps, current result, expected result)?

Until now you have shared that the smoketests are failing on your machine, but I don't have any details, exceptions, etc., and I don't know anything about the root cause. I would be very happy to help debug the issue, but I haven't understood the "privilege escalation" issue so far.

> Remove hard coded uid from Ozone docker image
> ---------------------------------------------
>
>                 Key: HDDS-1609
>                 URL: https://issues.apache.org/jira/browse/HDDS-1609
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Eric Yang
>            Priority: Major
>             Fix For: 0.5.0
>
>         Attachments: linux.txt, osx.txt
>
>
> The hadoop-runner image is hard coded to [USER hadoop|https://github.com/apache/hadoop/blob/docker-hadoop-runner-jdk11/Dockerfile#L45], and the hadoop user is hard coded to uid 1000. This arrangement complicates development environments where the host user has a uid other than 1000: data written to external bind-mount locations is owned by uid 1000, which can prevent the development environment from cleaning up test data.
> Docker documentation states that "The best way to prevent privilege-escalation attacks from within a container is to configure your container's applications to run as unprivileged users." From the Ozone architecture point of view, there is no reason for the Ozone daemons to require a privileged or hard-coded user.
> h3. Solution 1
> It would be best to support running the docker container as the host user to reduce friction. The user should be able to run:
> {code}
> docker run -u $(id -u):$(id -g) ...
> {code}
> or, in a docker-compose file:
> {code}
> user: "${UID}:${GID}"
> {code}
> By doing this, the user will be nameless in the docker container, and some commands may warn that the user does not have a name. This can be resolved by mounting /etc/passwd (or a file that looks like /etc/passwd) containing the host user entry.
> h3. Solution 2
> Move the hard coded user to a uid below 200. The default Linux profile reserves service users with uid < 200 and gives them a umask that keeps data private to the service user, or group writable if the service shares a group with other service users. Register the service user with Linux vendors to ensure that there is a reserved uid for the Hadoop user, or pick one that works for Hadoop. This is a longer route to pursue, and may not be fruitful.
> h3. Solution 3
> Default the docker image to having the sssd client installed. This allows the docker image to see host-level names by binding the sssd socket. The instructions for doing this are on the [Hadoop website|https://hadoop.apache.org/docs/r3.1.2/hadoop-yarn/hadoop-yarn-site/DockerContainers.html#User_Management_in_Docker_Container]. The prerequisite for this approach is that the host system has sssd installed. For production systems, there is a 99% chance that sssd is installed.
> We may want to support a combined solution of 1 and 3 to be proper.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
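The Solution 1 steps described in the issue (run as the host uid/gid, then supply a passwd-like file so the uid resolves to a name) can be sketched as follows. This is a minimal illustration, not the project's actual approach: the image name apache/hadoop-runner is taken from the Dockerfile link above, while the /tmp/passwd.ozone path and the passwd entry fields are hypothetical placeholders.

```shell
# Build a one-line passwd-like file that maps the host uid/gid to the
# name "hadoop" inside the container (entry fields are illustrative).
printf 'hadoop:x:%s:%s:Hadoop:/opt/hadoop:/bin/bash\n' \
    "$(id -u)" "$(id -g)" > /tmp/passwd.ozone

# Run the container as the host user (Solution 1) and bind-mount the
# passwd file read-only so commands like `whoami` find a user name.
# The trailing docker arguments are elided, as in the description above.
# docker run -u "$(id -u):$(id -g)" \
#     -v /tmp/passwd.ozone:/etc/passwd:ro \
#     apache/hadoop-runner ...
```

In a docker-compose file the same effect would come from the quoted `user: "${UID}:${GID}"` setting plus a read-only volume entry for the passwd file.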