[ https://issues.apache.org/jira/browse/HBASE-14605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975292#comment-14975292 ]

Andrew Purtell commented on HBASE-14605:
----------------------------------------

{quote}
bq. This change breaks Phoenix LocalIndexSplitter because of the API param change in SplitTransaction#stepsAfterPONR.

stepsAfterPONR is not in SplitTransaction; it is in SplitTransactionImpl, which is marked @InterfaceAudience.Private.
I think Phoenix should adjust LocalIndexSplitter for the new API.
{quote}

Yep, it's on Phoenix to move to the new APIs we made precisely to support local indexes, or this is going to keep happening. We've said this enough times now: HBase needs to evolve, and private APIs provide no compatibility guarantees by definition.
[~lhofhansl] [~jamestaylor] [~rajeshbabu]
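
To make the private-API point concrete, here is a minimal, hypothetical sketch (not the actual HBase source; the annotation package shown is the branch-1 era location) of what the @InterfaceAudience.Private marking implies for downstream callers like LocalIndexSplitter:

{code}
// Hypothetical sketch, not the actual HBase source. Anything carrying the
// private-audience annotation is internal to HBase, so its method
// signatures (stepsAfterPONR included) may change in any release.
import org.apache.hadoop.hbase.classification.InterfaceAudience;

@InterfaceAudience.Private
public class SplitTransactionImpl {

  // Downstream code such as Phoenix's LocalIndexSplitter that compiles
  // against one version of this parameter list breaks when it changes,
  // which is exactly the incompatibility reported above.
  void stepsAfterPONR() {
    // internal split logic elided
  }
}
{code}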

> Split fails due to 'No valid credentials' error when 
> SecureBulkLoadEndpoint#start tries to access hdfs
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14605
>                 URL: https://issues.apache.org/jira/browse/HBASE-14605
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>             Fix For: 2.0.0, 1.2.0, 1.3.0, 1.0.3, 1.1.3, 0.98.16
>
>         Attachments: 144605-branch-1-v3.txt, 14605-0.98-v5.txt, 
> 14605-branch-1-v4.txt, 14605-branch-1-v5.txt, 14605-branch-1.0-v5.txt, 
> 14605-v1.txt, 14605-v2.txt, 14605-v3.txt, 14605-v3.txt, 14605-v3.txt, 
> 14605-v4.txt, 14605-v5.txt, 14605.alt
>
>
> During recent testing in a secure cluster (with HBASE-14475), we found the following when user X (a non-super user) split a table with region replicas:
> {code}
> 2015-10-12 10:58:18,955 ERROR [FifoRpcScheduler.handler1-thread-9] master.HMaster: Region server hbase-4-4.novalocal,60020,1444645588137 reported a fatal error:
> ABORTING region server hbase-4-4.novalocal,60020,1444645588137: The coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint threw an unexpected exception
> Cause:
> java.lang.IllegalStateException: Failed to get FileSystem instance
>   at org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint.start(SecureBulkLoadEndpoint.java:148)
>   at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.startup(CoprocessorHost.java:415)
>   at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:257)
>   at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:160)
>   at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.<init>(RegionCoprocessorHost.java:192)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:701)
>   at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:608)
> ...
> Caused by: java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "hbase-4-4/172.22.66.186"; destination host is: "os-r6-okarus-hbase-4-2.novalocal":8020;
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1473)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1400)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>   at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:555)
>   at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy19.mkdirs(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2775)
>   at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2746)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:967)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:963)
>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> {code}
> The cause was that SecureBulkLoadEndpoint#start tried to create the staging directory in HDFS as user X but did not pass authentication.
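
For readers hitting the same trace: a common pattern for this class of failure is to perform the privileged HDFS call under the service's own Kerberos login rather than the remote caller's identity. The sketch below is illustrative only (it is not the HBASE-14605 patch; the class and method names are made up), using the standard Hadoop UserGroupInformation and FileSystem APIs:

{code}
// Illustrative sketch only -- NOT the HBASE-14605 patch. It shows the
// general pattern: run the HDFS call under the service's own Kerberos
// login (the RegionServer principal) instead of the remote caller's
// identity, which has no TGT on the RegionServer host.
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.UserGroupInformation;

public class StagingDirSketch {

  /** Creates stagingDir as the login user; names here are hypothetical. */
  public static Path createStagingDir(final Configuration conf,
      final Path stagingDir) throws Exception {
    // getLoginUser() is the identity the process itself authenticated as
    // (e.g. via its keytab), so the NameNode sees valid credentials even
    // when the RPC caller (user X above) cannot authenticate from this host.
    UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
    return loginUser.doAs(new PrivilegedExceptionAction<Path>() {
      @Override
      public Path run() throws Exception {
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(stagingDir, new FsPermission((short) 0711));
        return stagingDir;
      }
    });
  }
}
{code}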



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
