[ https://issues.apache.org/jira/browse/HADOOP-17266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17197404#comment-17197404 ]

Hadoop QA commented on HADOOP-17266:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 33m 15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 43s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 5s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 14s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 fixed = 104 total (was 236) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 25s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/70/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17266 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13011663/HADOOP-17266.001.patch |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux d592eb0ac313 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / ce861836918 |
| Test Results | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/70/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/70/console |
| versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Sudo in hadoop-functions.sh should preserve environment variables 
> ------------------------------------------------------------------
>
>                 Key: HADOOP-17266
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17266
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: scripts
>    Affects Versions: 3.3.0
>            Reporter: Chengbing Liu
>            Priority: Major
>         Attachments: HADOOP-17266.001.patch
>
>
> Steps to reproduce:
> 1. Set {{HDFS_NAMENODE_USER=hdfs}} in {{/etc/default/hadoop-hdfs-namenode}} to enable the user check (and switch to {{hdfs}} to start/stop the NameNode daemon)
> 2. Stop the NameNode with: {{service hadoop-hdfs-namenode stop}}
> 3. An error is reported and the NameNode is not stopped:
> {noformat}
> ERROR: Cannot execute /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh.
> Failed to stop Hadoop namenode. Return value: 1. [FAILED]
> {noformat}
> The root cause is that {{HADOOP_HOME=/usr/lib/hadoop}} is not preserved across sudo, and {{/usr/lib/hadoop-hdfs/bin/hdfs}} locates libexec with the following logic:
> {noformat}
> # let's locate libexec...
> if [[ -n "${HADOOP_HOME}" ]]; then
>   HADOOP_DEFAULT_LIBEXEC_DIR="${HADOOP_HOME}/libexec"
> else
>   bin=$(cd -P -- "$(dirname -- "${MYNAME}")" >/dev/null && pwd -P)
>   HADOOP_DEFAULT_LIBEXEC_DIR="${bin}/../libexec"
> fi
> {noformat}
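> That sudo starts the target command with a reset environment is easy to confirm in isolation. A minimal sketch, assuming the default {{env_reset}} sudoers policy (the commands are illustrative, not taken from the patch):
> {noformat}
> # As root, the way the init script would run:
> export HADOOP_HOME=/usr/lib/hadoop
> sudo -u hdfs env | grep HADOOP_HOME    # prints nothing: HADOOP_HOME is dropped
> {noformat}
> With {{HADOOP_HOME}} unset, the {{else}} branch above resolves libexec relative to {{/usr/lib/hadoop-hdfs/bin}}, which is exactly the {{/usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh}} path rejected in the error message.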
> I believe the key point here is that we should preserve environment variables when doing sudo.
> Note that this bug was not introduced by HDFS-15353; before that change, {{su -l}} was used, which also discards environment variables.
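> A minimal sketch of that idea (illustrative only; the actual call site in {{hadoop-functions.sh}} and the set of variables to forward are for the patch to decide): explicitly pass the needed variables through sudo, since plain {{sudo -u user cmd}} resets the environment under the default policy.
> {noformat}
> # Illustrative sketch, not the actual hadoop-functions.sh code.
> # Forward HADOOP_HOME across the user switch; whether command-line
> # VAR=value assignments are honored depends on the sudoers policy.
> sudo -u "${HDFS_NAMENODE_USER}" \
>     HADOOP_HOME="${HADOOP_HOME}" \
>     /usr/lib/hadoop-hdfs/bin/hdfs --daemon stop namenode
> {noformat}
> Alternatively, {{sudo --preserve-env=HADOOP_HOME}} (sudo 1.8.21+) keeps the variable without putting it on the command line, again subject to the sudoers policy.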


