[ https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17713111#comment-17713111 ]

ASF GitHub Bot commented on HADOOP-18705:
-----------------------------------------

hadoop-yetus commented on PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#issuecomment-1511439932

   :confetti_ball: **+1 overall**
   
   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 37s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 105m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5560 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 785a46dea41b 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c093fe1297cb91d261e100aa9c898ffe3de4d983 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/testReport/ |
   | Max. process+thread count | 535 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18705
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18705
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 3.4.0
>            Reporter: Tamas Domok
>            Assignee: Tamas Domok
>            Priority: Major
>              Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() is bound with the untouched configuration, which may still contain a credentialProviderPath entry that references incompatible credential providers (e.g. a jceks keystore stored on abfs itself). This results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if loading a JCEKS file, this means that the filesystem connector is trying to load the same file
> {quote}
> {code}
>         this.delegationTokenManager = abfsConfiguration.getDelegationTokenManager();
>         delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration already excludes the incompatible credential providers, so the bind should use that configuration instead; a sketch of such a change follows.
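> A minimal sketch of that kind of change (illustrative, not necessarily the committed patch; it assumes AbfsConfiguration#getRawConfiguration returns the configuration that has already had the incompatible providers stripped from its provider path):
> {code}
>         // Illustrative sketch: bind against the provider-path-sanitized
>         // configuration held by abfsConfiguration, rather than the raw
>         // configuration that was passed to initialize().
>         this.delegationTokenManager = abfsConfiguration.getDelegationTokenManager();
>         delegationTokenManager.bind(getUri(), abfsConfiguration.getRawConfiguration());
> {code}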
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar hadoop-mapreduce-examples.jar randomwriter "-Dmapreduce.randomwriter.totalbytes=100" "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" /user/qa/sort_input
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
>     at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>     at org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
>     at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:85)
>     at org.apache.hadoop.security.alias.KeyStoreProvider.<init>(KeyStoreProvider.java:49)
>     at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:42)
>     at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:35)
>     at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
>     at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
>     at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
>     at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
>     at org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
>     at org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.<init>(AbstractIDBClient.java:139)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.<init>(AbfsIDBClient.java:74)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
>     at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
>     at org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
>     at org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
>     at org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
>     at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
>     at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.getRollOverLogMaxSize(LogAggregationIndexedFileController.java:1164)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.initInternal(LogAggregationIndexedFileController.java:149)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController.initialize(LogAggregationFileController.java:138)
>     at org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory.<init>(LogAggregationFileControllerFactory.java:77)
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.addLogAggregationDelegationToken(YarnClientImpl.java:405)
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:321)
>     at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:303)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:331)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:252)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1576)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1573)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1573)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1594)
>     at org.apache.hadoop.examples.RandomWriter.run(RandomWriter.java:282)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
>     at org.apache.hadoop.examples.RandomWriter.main(RandomWriter.java:293)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>     at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>     at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> Caused by: org.apache.hadoop.fs.PathIOException: `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if loading a JCEKS file, this means that the filesystem connector is trying to load the same file
> {code}
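> For context on why that error exists: hadoop-common ships a helper for exactly this sanitization, which a connector can use to strip provider URIs that resolve through its own filesystem before passing the configuration on. A minimal illustrative sketch (the helper and the classes it references are real; the wrapper method itself is hypothetical):
> {code}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
> import org.apache.hadoop.security.ProviderUtils;
> 
> /**
>  * Hypothetical wrapper: drop credential providers (e.g. jceks files stored
>  * on abfs) that would recursively instantiate AzureBlobFileSystem while it
>  * is still initializing.
>  */
> static Configuration excludeAbfsProviders(Configuration conf) throws IOException {
>   return ProviderUtils.excludeIncompatibleCredentialProviders(conf, AzureBlobFileSystem.class);
> }
> {code}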


