Hi Matt!

Take a look at the mapreduce.jobhistory.* configuration parameters here for the
delay in moving finished jobs to the HistoryServer:
https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
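For example, the relevant properties go in mapred-site.xml along these lines (property names are from mapred-default.xml; the paths and interval shown are just illustrative, not recommendations):

```xml
<!-- mapred-site.xml: settings that control when finished jobs show up
     in the JobHistory Server. Values below are examples only. -->
<property>
  <!-- where the application master writes history files for finished jobs -->
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/mr-history/tmp</value>
</property>
<property>
  <!-- where the history server keeps files after moving them -->
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>
</property>
<property>
  <!-- how often (ms) the history server scans the intermediate dir
       and moves finished jobs into the done dir -->
  <name>mapreduce.jobhistory.move.interval-ms</name>
  <value>180000</value>
</property>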

I've seen this error ("hadoop is not allowed to impersonate hadoop") when I tried
configuring Hadoop proxy users.
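For reference, proxy users are enabled in core-site.xml roughly like this (the host value is a placeholder for your environment, and "*" for groups is wide open; tighten both in production):

```xml
<!-- core-site.xml: allow the "hadoop" user to impersonate other users.
     Host/group values below are placeholders, not recommendations. -->
<property>
  <!-- hosts from which the "hadoop" user may impersonate -->
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>historyserver.example.com</value>
</property>
<property>
  <!-- groups whose members the "hadoop" user may impersonate -->
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
```

Note the NameNode has to be restarted (or have its configuration refreshed) for these to take effect.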
 

     On Friday, January 23, 2015 10:43 AM, Matt K <matvey1...@gmail.com> wrote:
   

Hello,

I'm having an issue with Yarn's JobHistory Server, which is making it painful to
debug jobs. The latest jobs (from the last 12 hours or so) are missing from the
JobHistory Server but present in the ResourceManager Yarn UI: I see only 8 jobs
in the JobHistory Server, but 15 in the Yarn UI.
Not much useful stuff in the logs. Every few hours the following exception pops up
in mapred-hadoop-historyserver.log, but I don't know if it's related.
2015-01-23 03:41:40,003 WARN org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService: Could not process job files
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate hadoop
        at org.apache.hadoop.ipc.Client.call(Client.java:1409)
        at org.apache.hadoop.ipc.Client.call(Client.java:1362)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy9.getBlockLocations(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:219)
        at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1137)
        at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1127)
        at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1117)
        at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:264)
        at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:231)
        at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:224)
        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1290)
        at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
        at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:296)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:764)
        at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.buildJobIndexInfo(KilledHistoryService.java:196)
        at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.access$100(KilledHistoryService.java:85)
        at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:128)
        at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:125)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.run(KilledHistoryService.java:125)
        at java.util.TimerThread.mainLoop(Timer.java:555)
        at java.util.TimerThread.run(Timer.java:505)
Has anyone run into this before?

Thanks,
-Matt

   
