[ https://issues.apache.org/jira/browse/HADOOP-18235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
lujie updated HADOOP-18235:
---------------------------
Description:

Currently, we implement flush like:
{code:java}
public void flush() throws IOException {
  super.flush();
  if (LOG.isDebugEnabled()) {
    LOG.debug("Resetting permissions to '" + permissions + "'");
  }
  if (!Shell.WINDOWS) {
    Files.setPosixFilePermissions(Paths.get(file.getCanonicalPath()),
        permissions);
  } else {
    // FsPermission expects a 10-character string because of the leading
    // directory indicator, i.e. "drwx------". The JDK toString method returns
    // a 9-character string, so prepend a leading character.
    FsPermission fsPermission = FsPermission.valueOf(
        "-" + PosixFilePermissions.toString(permissions));
    FileUtil.setPermission(file, fsPermission);
  }
}
{code}
We write the credentials first, then set the permissions. The correct order is to set the permissions first, then write the credentials. Otherwise, we may leak credentials. For example, if the original permissions of the file are 755 (the default on Linux), the credentials can be leaked after a flush because:

1) In a short time window, others have a chance to read the file.
2) If the node crashes and reboots, the file permissions stay 755 until the CredentialShell is run again.

> vulnerability: we may leak sensitive information in LocalKeyStoreProvider
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-18235
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18235
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: lujie
>            Priority: Major
>
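A minimal sketch of the safer ordering using plain JDK NIO (not the Hadoop `FsPermission`/`FileUtil` path, and covering only the POSIX branch). The class `SafeFlushSketch` and the helper `writeRestricted` are hypothetical names introduced for illustration; the point is simply that the file is restricted to owner-only *before* any sensitive bytes land on disk, so no window exists in which the file is both populated and world-readable:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class SafeFlushSketch {

  // Hypothetical helper: tighten permissions BEFORE writing the secret,
  // the reverse of the ordering in the flush() above.
  public static void writeRestricted(Path file, byte[] secret)
      throws IOException {
    Set<PosixFilePermission> ownerOnly =
        PosixFilePermissions.fromString("rw-------");
    if (Files.exists(file)) {
      // Restrict first, then write: no window where others can read content.
      Files.setPosixFilePermissions(file, ownerOnly);
    } else {
      // Create the file already restricted, so it never exists wide open.
      Files.createFile(file, PosixFilePermissions.asFileAttribute(ownerOnly));
    }
    Files.write(file, secret);
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempFile("cred", ".jceks");
    writeRestricted(tmp, "not-a-real-secret".getBytes());
    // Print the POSIX permission string after the write.
    System.out.println(
        PosixFilePermissions.toString(Files.getPosixFilePermissions(tmp)));
    Files.delete(tmp);
  }
}
```

Even with this ordering, a node crash between permission-tightening and a later permission reset elsewhere cannot widen access, which addresses case 2) above.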
--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org