[ 
https://issues.apache.org/jira/browse/HDFS-14117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16815018#comment-16815018
 ] 

CR Hota commented on HDFS-14117:
--------------------------------

[~elgoiri] Thanks for reporting the test failure.

I tried to reproduce it locally and could not. However, after multiple executions 
of TestRouterHttpDelegationToken, I did reproduce it once locally.
{code:java}
[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
[ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.076 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
[ERROR] testGetDelegationToken(org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken)  Time elapsed: 0.115 s  <<< ERROR!
org.apache.hadoop.service.ServiceStateException: org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: router/localh...@example.com from keytab /hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
 javax.security.auth.login.LoginException: Integrity check on decrypted field failed (31) - PREAUTH_FAILED
        at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173)
        at org.apache.hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken.setup(TestRouterHttpDelegationToken.java:99)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
        at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
        at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
        at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
        at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
        at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
        at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
        at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
        at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
        at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
        at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
        at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
        at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: org.apache.hadoop.security.KerberosAuthException: failure to login: for principal: router/localh...@example.com from keytab /hadoop/hadoop-hdfs-project/hadoop-hdfs-rbf/target/test/data/SecurityConfUtil/test.keytab
 javax.security.auth.login.LoginException: Integrity check on decrypted field failed (31) - PREAUTH_FAILED
        at org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:2008)
        at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1376)
        at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1156)
        at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
        at org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:159)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
        ... 27 more

{code}
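
For reference, one way to repeat a flaky test like this within a single run (just a sketch; the rule class name is made up and not part of the suite) is a small JUnit 4 TestRule that re-runs the whole test body, including the @Before setup where this failure happens:
{code:java}
// Hedged sketch only: a minimal JUnit 4 rule that repeats each test several
// times to shake out intermittent failures. The class name is made up.
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class RepeatRule implements TestRule {

  private final int times;

  public RepeatRule(int times) {
    this.times = times;
  }

  @Override
  public Statement apply(final Statement base, Description description) {
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {
        for (int i = 0; i < times; i++) {
          base.evaluate(); // re-runs @Before, the test method, and @After
        }
      }
    };
  }
}
{code}
Declared on the test class as "@Rule public RepeatRule repeat = new RepeatRule(20);", each test then runs twenty times per Maven invocation.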
 

This looks to me like an intermittent issue with the way the MiniKDC is spawned and 
the principal is added before the login is initiated. I haven't had time to find 
the root cause. For now, I will create a Jira so someone can take a look. If this 
occurs again in the future, we can disable these two tests until we find the root 
cause.
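
If a stopgap is needed before the root cause is understood, the two tests could simply be annotated with JUnit's @Ignore, or the test setup could tolerate the transient failure. Below is a rough sketch of the latter, not the suite's actual helper; the principal and keytab arguments are placeholders for whatever SecurityConfUtil provides:
{code:java}
// Hedged sketch, not the existing test code: retry the keytab login so a
// transient PREAUTH_FAILED right after the MiniKDC starts does not fail setup.
// The principal and keytab arguments are placeholders.
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public final class RetryingKerberosLogin {

  private RetryingKerberosLogin() {
  }

  /** Attempt the keytab login up to {@code attempts} times, pausing between tries. */
  public static void loginWithRetries(String principal, String keytab, int attempts)
      throws IOException, InterruptedException {
    for (int i = 0; i < attempts; i++) {
      try {
        UserGroupInformation.loginUserFromKeytab(principal, keytab);
        return;
      } catch (IOException e) {
        // e.g. KerberosAuthException: ... PREAUTH_FAILED
        if (i == attempts - 1) {
          throw e; // out of retries, surface the original failure
        }
        Thread.sleep(1000L); // give the freshly started KDC a moment
      }
    }
  }
}
{code}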

> RBF: We can only delete the files or dirs of one subcluster in a cluster with 
> multiple subclusters when trash is enabled
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14117
>                 URL: https://issues.apache.org/jira/browse/HDFS-14117
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: venkata ramkumar
>            Assignee: venkata ramkumar
>            Priority: Major
>              Labels: RBF
>         Attachments: HDFS-14117-HDFS-13891.001.patch, 
> HDFS-14117-HDFS-13891.002.patch, HDFS-14117-HDFS-13891.003.patch, 
> HDFS-14117-HDFS-13891.004.patch, HDFS-14117-HDFS-13891.005.patch, 
> HDFS-14117-HDFS-13891.006.patch, HDFS-14117-HDFS-13891.007.patch, 
> HDFS-14117-HDFS-13891.008.patch, HDFS-14117-HDFS-13891.009.patch, 
> HDFS-14117-HDFS-13891.010.patch, HDFS-14117-HDFS-13891.011.patch, 
> HDFS-14117.001.patch, HDFS-14117.002.patch, HDFS-14117.003.patch, 
> HDFS-14117.004.patch, HDFS-14117.005.patch
>
>
> When we delete files or dirs in HDFS, the deleted files or dirs are moved to trash by default.
> But in the global path we can only mount one trash dir, /user. So we mount the trash dir /user of the subcluster ns1 to the global path /user. Then we can delete files or dirs of ns1, but when we delete the files or dirs of another subcluster, such as hacluster, it fails (see the sketch after the quoted commands).
> h1. Mount Table
> ||Global path||Target nameservice||Target path||Order||Read only||Owner||Group||Permission||Quota/Usage||Date Modified||Date Created||
> |/test|hacluster2|/test| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:37:42|2018/11/29 14:37:42|
> |/tmp|hacluster1|/tmp| | |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:37:05|2018/11/29 14:37:05|
> |/user|hacluster2,hacluster1|/user|HASH| |securedn|users|rwxr-xr-x|[NsQuota: -/-, SsQuota: -/-]|2018/11/29 14:42:37|2018/11/29 14:38:20|
> commands: 
> {noformat}
> 1./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /test/.
> 18/11/30 11:00:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r-- 3 securedn supergroup 8081 2018-11-30 10:56 /test/hdfs.cmd
> 2./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -ls /tmp/.
> 18/11/30 11:00:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--   3 securedn supergroup       6311 2018-11-30 10:57 /tmp/mapred.cmd
> 3../opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /tmp/mapred.cmd
> 18/11/30 11:01:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> rm: Failed to move to trash: hdfs://router/tmp/mapred.cmd: rename destination parent /user/securedn/.Trash/Current/tmp/mapred.cmd not found.
> 4./opt/HAcluater_ram1/install/hadoop/router/bin> ./hdfs dfs -rm /test/hdfs.cmd
> 18/11/30 11:01:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 18/11/30 11:01:22 INFO fs.TrashPolicyDefault: Moved: 'hdfs://router/test/hdfs.cmd' to trash at: hdfs://router/user/securedn/.Trash/Current/test/hdfs.cmd
> {noformat}
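
For context on the failure in step 3 of the quoted commands: the trash move is done on the client side by resolving a trash root under /user/<user>/.Trash and renaming the file into it, so when /tmp and /user resolve to different subclusters through the Router, the rename's destination parent does not exist. A rough illustration of that client-side flow using public Hadoop APIs (not the patch; "hdfs://router/" is the Router endpoint name taken from the report):
{code:java}
// Hedged illustration only, not the fix: how the client-side trash move in
// step 3 above resolves its destination.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashMoveIllustration {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://router/"), conf);

    Path file = new Path("/tmp/mapred.cmd");
    // The trash root resolves under /user/<user>/.Trash, i.e. under the /user mount.
    System.out.println("Trash root: " + fs.getTrashRoot(file));

    // The trash policy then renames the file to <trash root>/Current/tmp/mapred.cmd.
    // Through the Router this can fail when /tmp and /user map to different
    // subclusters, which matches the "rename destination parent ... not found"
    // error in the commands above.
    boolean moved = Trash.moveToAppropriateTrash(fs, file, conf);
    System.out.println("Moved to trash: " + moved);
  }
}
{code}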


