[ https://issues.apache.org/jira/browse/HDDS-608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDDS-608:
----------------------------
    Description: 
Right now only administrators can submit an MR job. All other users, 
including hdfs, will fail with the error below:
{code:java}
-bash-4.2$ ./ozone sh bucket create /volume2/bucket2
2018-10-09 23:03:46,399 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2018-10-09 23:03:47,473 INFO rpc.RpcClient: Creating Bucket: volume2/bucket2, 
with Versioning false and Storage Type set to DISK
-bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar 
/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar 
wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job
18/10/09 23:04:08 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:10 INFO client.AHSProxy: Connecting to Application History 
server at ctr-e138-1518143905142-510793-01-000004.hwx.site/172.27.79.197:10200
18/10/09 23:04:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
18/10/09 23:04:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding 
for path: /user/hdfs/.staging/job_1539125785626_0003
18/10/09 23:04:11 INFO input.FileInputFormat: Total input files to process : 1
18/10/09 23:04:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
18/10/09 23:04:11 INFO lzo.LzoCodec: Successfully loaded & initialized 
native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
18/10/09 23:04:11 INFO mapreduce.JobSubmitter: number of splits:1
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Executing with tokens: []
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO conf.Configuration: found resource resource-types.xml at 
file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:12 INFO impl.YarnClientImpl: Submitted application 
application_1539125785626_0003
18/10/09 23:04:12 INFO mapreduce.Job: The url to track the job: 
http://ctr-e138-1518143905142-510793-01-000005.hwx.site:8088/proxy/application_1539125785626_0003/
18/10/09 23:04:12 INFO mapreduce.Job: Running job: job_1539125785626_0003
18/10/09 23:04:22 INFO mapreduce.Job: Job job_1539125785626_0003 running in 
uber mode : false
18/10/09 23:04:22 INFO mapreduce.Job: map 0% reduce 0%
18/10/09 23:04:30 INFO mapreduce.Job: map 100% reduce 0%
18/10/09 23:04:36 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_000000_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:188)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:04:42 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_000000_1, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:188)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:04:48 INFO mapreduce.Job: Task Id : 
attempt_1539125785626_0003_r_000000_2, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access 
denied for user hdfs. Superuser privilege is required.
at 
org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
at 
org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
at 
org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
at 
org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:188)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
at 
org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
at 
org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
at 
org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at 
org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)

18/10/09 23:04:55 INFO mapreduce.Job: map 100% reduce 100%
18/10/09 23:04:55 INFO mapreduce.Job: Job job_1539125785626_0003 failed with 
state FAILED due to: Task failed task_1539125785626_0003_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1 killedMaps:0 
killedReduces: 0

18/10/09 23:04:55 INFO conf.Configuration: Removed undeclared tags:
18/10/09 23:04:55 INFO mapreduce.Job: Counters: 45
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=266353
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=215876
HDFS: Number of bytes written=0
HDFS: Number of read operations=2
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
O3: Number of bytes read=0
O3: Number of bytes written=0
O3: Number of read operations=0
O3: Number of large read operations=0
O3: Number of write operations=0
Job Counters
Failed reduce tasks=4
Launched map tasks=1
Launched reduce tasks=4
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=21156
Total time spent by all reduces in occupied slots (ms)=117416
Total time spent by all map tasks (ms)=5289
Total time spent by all reduce tasks (ms)=14677
Total vcore-milliseconds taken by all map tasks=5289
Total vcore-milliseconds taken by all reduce tasks=14677
Total megabyte-milliseconds taken by all map tasks=21663744
Total megabyte-milliseconds taken by all reduce tasks=120233984
Map-Reduce Framework
Map input records=716
Map output records=32019
Map output bytes=343475
Map output materialized bytes=6332
Input split bytes=121
Combine input records=32019
Combine output records=461
Spilled Records=461
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=130
CPU time spent (ms)=3320
Physical memory (bytes) snapshot=2524454912
Virtual memory (bytes) snapshot=5398167552
Total committed heap usage (bytes)=2697461760
Peak Map Physical memory (bytes)=2524454912
Peak Map Virtual memory (bytes)=5398167552
File Input Format Counters
Bytes Read=215755
18/10/09 23:04:55 INFO conf.Configuration: Removed undeclared tags:
{code}
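
The stack trace pins the rejection to the SCM side: SCMClientProtocolServer.getContainerWithPipeline delegates to StorageContainerManager.checkAdminAccess, which throws for any caller outside the configured admin list. A minimal sketch of that gate follows, for illustration only; apart from the two method names and the exception message taken from the trace above, everything here (the class name, the Set-backed admin list, the placeholder return value) is an assumption, not the actual SCM code.
{code:java}
import java.io.IOException;
import java.util.Set;

// Illustrative sketch of the admin gate implied by the stack trace.
// Field names and the admin-list representation are assumptions.
public class ScmAdminGateSketch {

  private final Set<String> adminUsernames;

  public ScmAdminGateSketch(Set<String> adminUsernames) {
    this.adminUsernames = adminUsernames;
  }

  // Mirrors the message seen in the trace:
  // "Access denied for user hdfs. Superuser privilege is required."
  public void checkAdminAccess(String remoteUser) throws IOException {
    if (!adminUsernames.contains(remoteUser)) {
      throw new IOException("Access denied for user " + remoteUser
          + ". Superuser privilege is required.");
    }
  }

  // The container lookup is gated behind the admin check, so the
  // block-allocation path (ChunkGroupOutputStream.allocateNewBlock ->
  // getContainerWithPipeline) fails for every non-admin caller.
  public String getContainerWithPipeline(long containerID, String remoteUser)
      throws IOException {
    checkAdminAccess(remoteUser);
    return "container-" + containerID; // placeholder, not the real pipeline info
  }

  public static void main(String[] args) throws IOException {
    ScmAdminGateSketch scm = new ScmAdminGateSketch(Set.of("scmadmin"));
    scm.getContainerWithPipeline(1L, "scmadmin"); // passes
    scm.getContainerWithPipeline(1L, "hdfs");     // throws, as in the job above
  }
}
{code}
This also explains why the job reaches map 100% before failing: the reducers are the first tasks to write to o3://bucket2.volume2/mr_job, so they are the first to allocate Ozone blocks and hit the admin check.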



> Access denied exception for user hdfs while making getContainerWithPipeline 
> call.
> ---------------------------------------------------------------------------------
>
>                 Key: HDDS-608
>                 URL: https://issues.apache.org/jira/browse/HDDS-608
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>    Affects Versions: 0.2.1
>            Reporter: Namit Maheshwari
>            Assignee: Mukul Kumar Singh
>            Priority: Major
>             Fix For: 0.3.0, 0.4.0
>
>         Attachments: HDDS-608.001.patch
>
>


