[ https://issues.apache.org/jira/browse/HADOOP-12846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15170736#comment-15170736 ]
Chris Nauroth commented on HADOOP-12846:
----------------------------------------

Hi [~lmccay]. This looks great overall! I have a few small comments.

{code}
  public static Configuration excludeIncompatibleCredentialProviders(
      Configuration config, Class fileSystemClass) throws IOException {
    ...
    Class<?> clazz = null;
    try {
      String scheme = path.toUri().getScheme();
      clazz = FileSystem.getFileSystemClass(scheme, config);
    ...
{code}

I think we can improve type safety in the above code by changing the type of {{fileSystemClass}} and {{clazz}} to {{Class<? extends FileSystem>}}. The call to {{FileSystem#getFileSystemClass}} always returns {{Class<? extends FileSystem>}}, and it would never be valid to call this method and pass a {{Class}} that doesn't satisfy that type constraint. With the suggested change, that would be enforced at compile time.

{code}
        LOG.warn("Filesystem based provider" +
            " excluded from provider path due to recursive dependency: " +
            provider);
{code}

I'd like to suggest logging this at debug level instead of warn. In some deployments, this might not be an unusual condition. If something like Sqoop uses a credential provider for a database password, then logging this at warn could potentially display it to the console on every Sqoop invocation, which wouldn't be a good user experience.

{code}
    } catch (URISyntaxException e) {
      LOG.warn("Credential Provider URI is invalid." + provider);
    }
{code}

Is logging a warning and trying to proceed the right thing to do here, or should a bad URI cause an abort by throwing an exception?

Do you want to update S3A to use the new {{ProviderUtils}} method too?
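The bounded-wildcard suggestion above can be illustrated with a self-contained sketch. Note the {{FileSystem}} stand-in below is a hypothetical minimal class, not Hadoop's, so the compile-time argument can be shown without any Hadoop dependencies:

```java
// Minimal stand-in types to illustrate the bounded wildcard; these are
// NOT Hadoop's classes, just a sketch of the type-safety argument.
class FileSystem {}
class WasbFileSystem extends FileSystem {}

public class BoundedWildcardDemo {
  // With Class<? extends FileSystem>, passing e.g. String.class is a
  // compile-time error instead of a latent runtime problem.
  static String describe(Class<? extends FileSystem> fsClass) {
    return "filesystem class: " + fsClass.getSimpleName();
  }

  public static void main(String[] args) {
    System.out.println(describe(WasbFileSystem.class));
    // describe(String.class);  // would not compile: String is not a FileSystem
  }
}
```

Because {{FileSystem#getFileSystemClass}} already returns the bounded type, no caller-side casts are needed after the change.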
> Credential Provider Recursive Dependencies
> ------------------------------------------
>
>                 Key: HADOOP-12846
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12846
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Larry McCay
>            Assignee: Larry McCay
>         Attachments: HADOOP-12846-001.patch, HADOOP-12846-002.patch, HADOOP-12846-003.patch
>
> There are a few credential provider integration points in which the use of a certain type of provider in a certain filesystem causes a recursive infinite loop.
> For instance, a component such as Sqoop can be protecting a DB password in a credential provider within the wasb/azure filesystem. Now that HADOOP-12555 has introduced the ability to protect the access keys for wasb, we suddenly need access to wasb to get the database keys, which initiates the attempt to get the access keys from wasb - since there is a provider path configured for Sqoop.
> For such integrations - those in which it doesn't make sense to protect the access keys inside the thing that we need the keys to access - we need a solution to avoid this recursion, other than dictating which filesystems can be used by other components.
> This patch proposes the ability to scrub the configured provider path of any provider types that would be incompatible with the integration point. In other words, before calling Configuration.getPassword for the access keys to wasb, we can remove any configured providers that require access to wasb.
> This will require some regular expressions that can be used to identify the configuration of such provider URIs within the provider path parameter.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
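The scrubbing idea in the description can be sketched as follows. This is a hypothetical illustration, not the actual {{ProviderUtils}} patch: it filters a comma-separated provider path, dropping entries whose keystore-provider URI nests the scheme of the filesystem about to be initialized (e.g. {{jceks://wasb@...}} while initializing wasb), which is exactly the recursion the issue describes. The method name and parsing details are assumptions for the sketch:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.List;

public class ProviderPathScrubDemo {
  // Hypothetical sketch: drop providers whose nested filesystem scheme
  // matches the filesystem being initialized, breaking the recursive
  // lookup (jceks-on-filesystem URIs carry the underlying scheme in the
  // authority, e.g. jceks://wasb@account/path or jceks://file/path).
  static String excludeScheme(String providerPath, String fsScheme) {
    List<String> kept = new ArrayList<>();
    for (String provider : providerPath.split(",")) {
      String trimmed = provider.trim();
      try {
        URI uri = new URI(trimmed);
        String authority = uri.getAuthority();
        String nested = authority == null ? "" : authority.split("@")[0];
        if (!nested.equalsIgnoreCase(fsScheme)) {
          kept.add(trimmed);
        }
      } catch (URISyntaxException e) {
        // An invalid entry is kept here; whether to warn or abort on a
        // bad URI is the open question raised in the review comment.
        kept.add(trimmed);
      }
    }
    return String.join(",", kept);
  }

  public static void main(String[] args) {
    String path = "jceks://wasb@acct/creds.jceks,jceks://file/tmp/creds.jceks";
    // Before reading wasb access keys, scrub wasb-backed providers.
    System.out.println(excludeScheme(path, "wasb"));
  }
}
```

With this shape, Sqoop's file-backed database-password provider survives the scrub while the wasb-backed one is removed before {{Configuration.getPassword}} is consulted for the wasb access keys.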