[jira] [Commented] (HADOOP-18235) vulnerability: we may leak sensitive information in LocalKeyStoreProvider
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836365#comment-17836365 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

hadoop-yetus commented on PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#issuecomment-2050626727

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 6m 42s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 34m 55s | | trunk passed |
| +1 :green_heart: | compile | 9m 1s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 8m 17s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 43s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 56s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 45s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 30s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 16s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 8m 33s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 8m 33s | | the patch passed |
| +1 :green_heart: | compile | 8m 9s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 8m 9s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 39s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 55s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 43s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| -1 :x: | spotbugs | 1m 40s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 21m 19s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 16m 3s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | 146m 57s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Exceptional return value of java.io.File.createNewFile() ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:[line 147] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4998 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 8a78c1cdd11e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5d045909b32ff03a576e18822b4235a5c6dc07bf |
| Default Java | Private
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721613#comment-17721613 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

saxenapranav commented on code in PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#discussion_r1190612252

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalKeyStoreProvider.java:

```diff
@@ -142,20 +142,26 @@ protected void initFileSystem(URI uri)
   @Override
   public void flush() throws IOException {
-    super.flush();
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Resetting permissions to '" + permissions + "'");
-    }
-    if (!Shell.WINDOWS) {
-      Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
-          permissions);
-    } else {
-      // FsPermission expects a 10-character string because of the leading
-      // directory indicator, i.e. "drwx------". The JDK toString method returns
-      // a 9-character string, so prepend a leading character.
-      FsPermission fsPermission = FsPermission.valueOf(
-          "-" + PosixFilePermissions.toString(permissions));
-      FileUtil.setPermission(file, fsPermission);
+    super.getWriteLock().lock();
+    try {
+      file.createNewFile();
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Resetting permissions to '" + permissions + "'");
+      }
+      if (!Shell.WINDOWS) {
+        Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
+            permissions);
+      } else {
+        // FsPermission expects a 10-character string because of the leading
+        // directory indicator, i.e. "drwx------". The JDK toString method returns
+        // a 9-character string, so prepend a leading character.
+        FsPermission fsPermission = FsPermission.valueOf(
+            "-" + PosixFilePermissions.toString(permissions));
+        FileUtil.setPermission(file, fsPermission);
+      }
```

Review Comment:
What I mean to say is: what if some other process writes into the file between `file.createNewFile()` and `FileUtil.setPermission(file, fsPermission);`? In that case, the file would contain corrupted data.

Kindly correct me if this looks wrong. Thanks. @arp7

> vulnerability: we may leak sensitive information in LocalKeyStoreProvider
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18235
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18235
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: lujie
>            Assignee: Clay B.
>            Priority: Critical
>              Labels: pull-request-available
>
> Currently, we implement flush like:
> {code:java}
> public void flush() throws IOException {
>   super.flush();
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("Resetting permissions to '" + permissions + "'");
>   }
>   if (!Shell.WINDOWS) {
>     Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
>         permissions);
>   } else {
>     // FsPermission expects a 10-character string because of the leading
>     // directory indicator, i.e. "drwx------". The JDK toString method returns
>     // a 9-character string, so prepend a leading character.
>     FsPermission fsPermission = FsPermission.valueOf(
>         "-" + PosixFilePermissions.toString(permissions));
>     FileUtil.setPermission(file, fsPermission);
>   }
> } {code}
> We write the Credential first, then set the permission. The correct order is
> to set the permission first, then write the Credential. Otherwise we may leak
> the Credential. For example, if the original permission of the file is
> 755 (the default on Linux), the Credential can be leaked when:
>
> 1) between flush and setPermission, others have a chance to access the file;
> 2) CredentialShell (or the machine node) crashes between flush and
> setPermission, so the file permission stays 755 forever until we run
> CredentialShell again.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
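The ordering the issue asks for (create the file, restrict its permissions while it is still empty, and only then write the secret) can be sketched outside Hadoop as follows. This is a minimal illustration of the POSIX path only; the class and method names are hypothetical, not from LocalKeyStoreProvider:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class SafeFlushSketch {
  // Create the file empty, lock it down to owner-only, then write the
  // sensitive bytes. A crash between any two steps never leaves secret
  // bytes in a world-readable file.
  static void writeSecret(Path file, byte[] secret) throws IOException {
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
    if (Files.notExists(file)) {
      Files.createFile(file);                    // empty file first
    }
    Files.setPosixFilePermissions(file, perms);  // restrict while still empty
    try (OutputStream out = Files.newOutputStream(file)) {
      out.write(secret);                         // only now write the secret
    }
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempDirectory("kstest").resolve("test.jceks");
    writeSecret(p, "secret".getBytes());
    // Prints the final permission string of the keystore file.
    System.out.println(PosixFilePermissions.toString(
        Files.getPosixFilePermissions(p)));
  }
}
```

The key point is simply that `setPosixFilePermissions` runs before any credential bytes reach the file, which is the inversion of the order the current code uses.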
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721517#comment-17721517 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

arp7 commented on code in PR #4998 (on the same `flush()` hunk in LocalKeyStoreProvider.java):
URL: https://github.com/apache/hadoop/pull/4998#discussion_r1190300509

Review Comment:
@saxenapranav I don't believe this is an issue. If this process has successfully got a write handle, then it is assumed no one else is actively writing to the file.
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17692040#comment-17692040 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

pranavsaxena-microsoft commented on code in PR #4998 (on the same `flush()` hunk in LocalKeyStoreProvider.java):
URL: https://github.com/apache/hadoop/pull/4998#discussion_r1114022360

Review Comment:
In the method getOutputStreamForKeystore(), before returning the outputStream, should it be checked that the file is empty? The reason being that, between creating the file and setting permissions, some other process could put something in the file.
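The window the reviewers discuss, between creating the file and setting its permissions, can be closed entirely on POSIX by passing the permissions as a creation attribute, so the file never exists with looser permissions than intended. This is a sketch of that alternative, not part of the PR; the helper name is hypothetical. (Note the umask can only further restrict the requested mode, never loosen it.)

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class AtomicCreateSketch {
  // Create the file with mode 600 in the same syscall as the creation
  // (open(2) with O_CREAT|O_EXCL and the requested mode), so there is no
  // instant at which the file exists with default permissions.
  static void createPrivate(Path file) throws IOException {
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-------");
    FileAttribute<Set<PosixFilePermission>> attr =
        PosixFilePermissions.asFileAttribute(perms);
    try {
      Files.createFile(file, attr);                 // atomic create-with-mode
    } catch (FileAlreadyExistsException e) {
      Files.setPosixFilePermissions(file, perms);   // pre-existing file: tighten in place
    }
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempDirectory("kstest").resolve("creds.jceks");
    createPrivate(p);
    System.out.println(PosixFilePermissions.toString(
        Files.getPosixFilePermissions(p)));
  }
}
```

With this shape there is nothing for another process to write into before the permissions land, which would address the race without needing an emptiness check on the stream.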
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17682223#comment-17682223 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

steveloughran commented on code in PR #4998 (on the `+ super.getWriteLock().lock();` line of the `flush()` hunk in LocalKeyStoreProvider.java):
URL: https://github.com/apache/hadoop/pull/4998#discussion_r1090918223

Review Comment:
no need for the super. prefix here but: we do now require lock() to be reentrant
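The reentrancy point can be demonstrated in isolation. If `flush()` takes the write lock and then calls `super.flush()`, which takes the same lock again, the lock must be reentrant or the thread deadlocks on itself. Assuming the provider's lock is backed by a `java.util.concurrent.locks.ReentrantReadWriteLock` (an assumption here; any non-reentrant lock would hang this sketch), the second acquisition by the same thread succeeds:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrancyDemo {
  private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock(true);

  // Stand-in for super.flush() taking the same write lock again.
  static void innerFlush() {
    LOCK.writeLock().lock();   // second acquisition by the same thread: OK
    try {
      System.out.println("hold count: " + LOCK.getWriteHoldCount());
    } finally {
      LOCK.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    LOCK.writeLock().lock();   // first acquisition, as in the overriding flush()
    try {
      innerFlush();            // re-enters without deadlock
    } finally {
      LOCK.writeLock().unlock();
    }
  }
}
```

A non-reentrant lock (e.g. a binary semaphore used as a mutex) would block forever at the second `lock()`, which is why the review flags the new requirement.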
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17682195#comment-17682195 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

steveloughran commented on code in PR #4998 (on the `+ super.getWriteLock().lock();` line of the `flush()` hunk in LocalKeyStoreProvider.java):
URL: https://github.com/apache/hadoop/pull/4998#discussion_r1090918223

Review Comment:
no need for the super. prefix here
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678275#comment-17678275 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

steveloughran commented on PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#issuecomment-1387207281

where are we with this patch? can/should we get it into 3.3.5?
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17616060#comment-17616060 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

hadoop-yetus commented on PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#issuecomment-1275217772

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 1m 2s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 42m 8s | | trunk passed |
| +1 :green_heart: | compile | 25m 35s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 22m 1s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 54s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 0s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 53s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 6s | | the patch passed |
| +1 :green_heart: | compile | 24m 51s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 24m 51s | | the patch passed |
| +1 :green_heart: | compile | 22m 6s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 22m 6s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 52s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 16s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| -1 :x: | spotbugs | 3m 1s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/2/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 26m 20s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 18m 34s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 10s | | The patch does not generate ASF License warnings. |
| | | 228m 23s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Exceptional return value of java.io.File.createNewFile() ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:[line 147] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4998 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 9019c7bbd85e 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5d045909b32ff03a576e18822b4235a5c6dc07bf |
| Default Java | Private
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615991#comment-17615991 ]

Larry McCay commented on HADOOP-18235:
--------------------------------------

[~clayb] - I do not recall what the issue was - it was 7+ years ago, I think. :)
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615954#comment-17615954 ]

Clay B. commented on HADOOP-18235:
----------------------------------

[~ste...@apache.org] and [~lmccay] Thank you for the direction here. For reference, my thought for testing was to try this in an environment, as it looks like was done for HADOOP-11934. Would that be a reasonable test? It looks involved, so I expect it to take me a few days of tinkering, which I'll report back on in this JIRA.

[~lmccay], you say you had an issue with the proposed ordering in the GitHub review. To confirm: do you mean trying the touch and chmod before the write? If so, did you hit exceptions when working on HADOOP-11934, or do you recall the general class of issue I should watch for?

> vulnerability: we may leak sensitive information in LocalKeyStoreProvider
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18235
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18235
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: lujie
>            Assignee: Clay B.
>            Priority: Critical
>              Labels: pull-request-available
>
> Currently, we implement flush like:
> {code:java}
> public void flush() throws IOException {
>   super.flush();
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("Resetting permissions to '" + permissions + "'");
>   }
>   if (!Shell.WINDOWS) {
>     Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
>         permissions);
>   } else {
>     // FsPermission expects a 10-character string because of the leading
>     // directory indicator, i.e. "drwx------". The JDK toString method returns
>     // a 9-character string, so prepend a leading character.
>     FsPermission fsPermission = FsPermission.valueOf(
>         "-" + PosixFilePermissions.toString(permissions));
>     FileUtil.setPermission(file, fsPermission);
>   }
> } {code}
> We write the credential first, then set permissions. The correct order is to
> set permissions first, then write the credential; otherwise we may leak the
> credential. For example, if the file's original permissions are 755 (the
> default on Linux), the credential can be leaked once it is flushed because:
> 1) between flush and setPermission, others have a chance to read the file;
> 2) if the CredentialShell (or the machine) crashes between flush and
> setPermission, the file permissions stay 755 until we run the
> CredentialShell again.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
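The safe ordering the report asks for can be demonstrated outside of Hadoop. Below is a minimal standalone sketch (hypothetical names, not the LocalKeyStoreProvider code) that creates the file and clamps it to owner-only before any secret bytes are written, assuming a POSIX filesystem:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class SafeCredentialWrite {
  // Create an empty file and restrict it to rw------- BEFORE writing the
  // secret, so the credential bytes never exist under the default umask.
  static Path writeSecret(Path file, byte[] secret) throws IOException {
    Set<PosixFilePermission> ownerOnly =
        PosixFilePermissions.fromString("rw-------");
    if (!Files.exists(file)) {
      Files.createFile(file);                       // empty: nothing to leak yet
    }
    Files.setPosixFilePermissions(file, ownerOnly); // restrict before any write
    Files.write(file, secret);                      // secret only ever at 600
    return file;
  }

  public static void main(String[] args) throws IOException {
    Path p = Files.createTempDirectory("cred").resolve("test.jceks");
    writeSecret(p, "hunter2".getBytes());
    System.out.println(
        PosixFilePermissions.toString(Files.getPosixFilePermissions(p)));
  }
}
```

With this order, a crash between the two steps leaves only an empty, locked-down file behind rather than a world-readable credential.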
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615947#comment-17615947 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

cbaenziger commented on code in PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#discussion_r992538793

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalKeyStoreProvider.java:

@@ -142,21 +142,27 @@ protected void initFileSystem(URI uri)
   @Override
   public void flush() throws IOException {
-    super.flush();
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Resetting permissions to '" + permissions + "'");
-    }
-    if (!Shell.WINDOWS) {
-      Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
-          permissions);
-    } else {
-      // FsPermission expects a 10-character string because of the leading
-      // directory indicator, i.e. "drwx------". The JDK toString method returns
-      // a 9-character string, so prepend a leading character.
-      FsPermission fsPermission = FsPermission.valueOf(
-          "-" + PosixFilePermissions.toString(permissions));
-      FileUtil.setPermission(file, fsPermission);
+    try {
+      super.getWriteLock().lock();
+      file.createNewFile();
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Resetting permissions to '" + permissions + "'");
+      }
+      if (!Shell.WINDOWS) {
+        Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
+            permissions);
+      } else {
+        // FsPermission expects a 10-character string because of the leading
+        // directory indicator, i.e. "drwx------". The JDK toString method returns
+        // a 9-character string, so prepend a leading character.
+        FsPermission fsPermission = FsPermission.valueOf(
+            "-" + PosixFilePermissions.toString(permissions));
+        FileUtil.setPermission(file, fsPermission);
+      }
+    } finally {
+      super.getWriteLock().unlock();
     }
+    super.flush();

Review Comment:

## My initial assumptions were:

If we fail to set permissions, I would expect an `IOError` or the like to interrupt execution and for us to not get to the flush call.
I am a bit of a novice with Java ReadWriteLocks, and was thinking we could deadlock if we do not release the write lock, since flush in the [super class](https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java#L285) also attempts to acquire the write lock. Lastly, I was thinking nothing in Hadoop would be setting more permissive permissions on this file.

## My updated understanding thanks to your question:

Reading the [JavaDocs](https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html) for ReentrantReadWriteLock, I think the locks are handled per-thread, not per-scope, and it appears one thread can acquire the write lock multiple times, so I think flush can move inside the initial lock. Further, from the JavaDocs and other uses in Hadoop, it looks like the write-lock acquisition should be outside the try/finally block. Code updated as such.
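The reentrancy question above can be checked directly. A small sketch (independent of the Hadoop classes) showing that `ReentrantReadWriteLock`'s write lock is held per-thread and can be re-acquired by its holder without deadlocking:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReentrantDemo {
  // Acquire the write lock twice on the same thread and report the hold
  // count; a non-reentrant lock would deadlock on the second lock() call.
  static int doubleLockHoldCount(ReentrantReadWriteLock lock) {
    lock.writeLock().lock();
    lock.writeLock().lock();     // re-entry by the holder: returns immediately
    int held = lock.getWriteHoldCount();
    lock.writeLock().unlock();   // must unlock once per lock() call
    lock.writeLock().unlock();
    return held;
  }

  public static void main(String[] args) {
    System.out.println(doubleLockHoldCount(new ReentrantReadWriteLock())); // 2
  }
}
```

This is why a subclass method that holds the write lock can safely call a superclass method that also takes it, as long as every `lock()` is paired with an `unlock()`.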
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615368#comment-17615368 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

hadoop-yetus commented on PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#issuecomment-1273856186

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 45s | | Docker mode activated. |
||| _ Prechecks _ |||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |||
| +1 :green_heart: | mvninstall | 39m 24s | | trunk passed |
| +1 :green_heart: | compile | 23m 22s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 20m 51s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 55s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 2m 56s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 37s | | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |||
| +1 :green_heart: | mvninstall | 1m 4s | | the patch passed |
| +1 :green_heart: | compile | 22m 47s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 22m 47s | | the patch passed |
| +1 :green_heart: | compile | 20m 58s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 20m 58s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 30s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 0s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| -1 :x: | spotbugs | 2m 54s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 23m 20s | | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |||
| +1 :green_heart: | unit | 18m 58s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 15s | | The patch does not generate ASF License warnings. |
| | | 215m 22s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Exceptional return value of java.io.File.createNewFile() ignored in org.apache.hadoop.security.alias.LocalKeyStoreProvider.flush() At LocalKeyStoreProvider.java:[line 147] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4998/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4998 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 44ecefdc1706 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b408f76d010b613e58b4f68d33c4e6d3149dc78f |
| Default Java | Private
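The new SpotBugs finding above is about the ignored boolean from `File.createNewFile()` (the call returns `false` when the file already exists, and ignoring that hides the distinction). One hedged way to satisfy the checker — a hypothetical helper, not the committed fix — is to branch on the return value:

```java
import java.io.File;
import java.io.IOException;

public class CreateNewFileCheck {
  // createNewFile() reports via its boolean return whether a file was
  // actually created; SpotBugs flags callers that discard it. Here we
  // use the return to sanity-check the "already existed" case.
  static boolean ensureFile(File f) throws IOException {
    boolean created = f.createNewFile();
    if (!created && !f.isFile()) {
      throw new IOException("Path exists but is not a regular file: " + f);
    }
    return created;
  }

  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("demo", ".tmp");
    System.out.println(ensureFile(f)); // false: already exists
    f.delete();
    System.out.println(ensureFile(f)); // true: newly created
    f.delete();
  }
}
```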
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615271#comment-17615271 ]

Larry McCay commented on HADOOP-18235:
--------------------------------------

[~clayb] - thanks for the PR here! I added a review comment on the PR itself.
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615270#comment-17615270 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

lmccay commented on code in PR #4998:
URL: https://github.com/apache/hadoop/pull/4998#discussion_r991563102

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalKeyStoreProvider.java:

@@ -142,21 +142,27 @@ protected void initFileSystem(URI uri)
   @Override
   public void flush() throws IOException {
-    super.flush();
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("Resetting permissions to '" + permissions + "'");
-    }
-    if (!Shell.WINDOWS) {
-      Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
-          permissions);
-    } else {
-      // FsPermission expects a 10-character string because of the leading
-      // directory indicator, i.e. "drwx------". The JDK toString method returns
-      // a 9-character string, so prepend a leading character.
-      FsPermission fsPermission = FsPermission.valueOf(
-          "-" + PosixFilePermissions.toString(permissions));
-      FileUtil.setPermission(file, fsPermission);
+    try {
+      super.getWriteLock().lock();
+      file.createNewFile();
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Resetting permissions to '" + permissions + "'");
+      }
+      if (!Shell.WINDOWS) {
+        Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
+            permissions);
+      } else {
+        // FsPermission expects a 10-character string because of the leading
+        // directory indicator, i.e. "drwx------". The JDK toString method returns
+        // a 9-character string, so prepend a leading character.
+        FsPermission fsPermission = FsPermission.valueOf(
+            "-" + PosixFilePermissions.toString(permissions));
+        FileUtil.setPermission(file, fsPermission);
+      }
+    } finally {
+      super.getWriteLock().unlock();
     }
+    super.flush();

Review Comment:

Should this be inside the try block if we are attempting to not write to an open-permission keystore?
I think this would currently not address your #2 concern with the CredentialShell or node failing during the permission set.
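The try-block placement being discussed follows the standard `Lock` idiom. A minimal generic sketch (not the provider code) of acquiring before the `try`, so the `finally` never tries to unlock a lock that was never taken:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockIdiom {
  private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();
  private static int value;

  // lock() sits OUTSIDE the try: if acquisition itself throws, the finally
  // is never entered, so we never unlock() a lock this thread does not hold
  // (which would throw IllegalMonitorStateException).
  static void update(int v) {
    LOCK.writeLock().lock();
    try {
      value = v;               // the guarded work
    } finally {
      LOCK.writeLock().unlock();
    }
  }

  static int get() {
    LOCK.readLock().lock();
    try {
      return value;
    } finally {
      LOCK.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    update(42);
    System.out.println(get()); // 42
  }
}
```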
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615262#comment-17615262 ]

Clay B. commented on HADOOP-18235:
----------------------------------

Thanks for taking a look, [~ste...@apache.org], and for saving me from trying to boil the testing ocean for this.
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615257#comment-17615257 ]

ASF GitHub Bot commented on HADOOP-18235:
-----------------------------------------

cbaenziger opened a new pull request, #4998:
URL: https://github.com/apache/hadoop/pull/4998

### Description of PR

This ensures we have a file and have set permissions on it before writing out data. I simply rearranged the current logic; I am unaware if there is a better pattern to follow elsewhere in Hadoop.

### How was this patch tested?

This is an untested PR. I have merely verified it builds.

### For code changes:

- [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [N/A] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [N/A] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [N/A] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17615083#comment-17615083 ]

Steve Loughran commented on HADOOP-18235:
-----------------------------------------

Create a GitHub PR for your work. Coming up with a test for this is probably impossible; we will have to rely on review and the existing tests.
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614935#comment-17614935 ]

lujie commented on HADOOP-18235:
--------------------------------

Hi [~clayb], thanks for fixing this bug. It looks good. I also want to give another reason why we need this fix: currently the file is created by getOutputStreamForKeystore, so the permissions of the new file are affected by the umask. Assume the umask is 277; then the permissions can be 500 and we could not write the file. Your patch avoids this problem.
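lujie's umask arithmetic can be verified with a tiny sketch (plain bit math, not Hadoop code): a newly created file gets mode `requested & ~umask`, so a 277 umask on a 777 request yields 500, i.e. owner read and execute only, and the owner cannot write the file back:

```java
public class UmaskDemo {
  // Effective mode bits of a newly created file: requested & ~umask.
  static int effective(int requested, int umask) {
    return requested & ~umask & 0777; // keep only the nine permission bits
  }

  public static void main(String[] args) {
    // umask 277 on a 777 request -> 500: the owner cannot write the file back.
    System.out.println(Integer.toOctalString(effective(0777, 0277))); // 500
    // The common default umask 022 on a 777 request -> 755, the leaky
    // default the issue description mentions.
    System.out.println(Integer.toOctalString(effective(0777, 0022))); // 755
  }
}
```

Setting permissions explicitly after creating the file, as the patch does, makes the result independent of whatever umask the process inherited.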
[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17614831#comment-17614831 ]

Clay B. commented on HADOOP-18235:
----------------------------------

I have what I think is a fix at https://github.com/cbaenziger/hadoop/tree/HADOOP-18235; however, I'm trying to find a way to test it. I see this code came about from HADOOP-11934, so maybe I can test it in some similar way.