[ https://issues.apache.org/jira/browse/HDDS-608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644769#comment-16644769 ]
Hadoop QA commented on HDDS-608:
--------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 16s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 53s{color} | {color:red} hadoop-hdds/server-scm generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 48s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 24s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/server-scm |
| | Dead store to remoteUser in org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long) At SCMClientProtocolServer.java:org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(long) At SCMClientProtocolServer.java:[line 189] |
| Failed junit tests | hadoop.ozone.container.TestCloseContainerWatcher |
\\
\\
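For context, the new FindBugs warning above is a "dead store": a value assigned to a local variable that is never read afterwards. A minimal sketch of the pattern being flagged at SCMClientProtocolServer.java line 189 (an illustration only, with an assumed method body rather than the actual patch contents):

{code:java}
// Minimal sketch of a dead store, assuming a method shape like the one named
// in the report; helper and field names here are illustrative, not the patch.
public ContainerWithPipeline getContainerWithPipeline(long containerID)
    throws IOException {
  String remoteUser = getRpcRemoteUsername(); // dead store: assigned, never read
  // If the admin check no longer consumes remoteUser (for example because it
  // was removed or its argument changed), FindBugs flags the assignment above.
  return scm.getScmContainerManager().getContainerWithPipeline(containerID);
}
{code}

The usual fix is either to drop the now-unused assignment or to keep it and pass remoteUser into whatever check replaced the old one.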
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-608 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943201/HDDS-608.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a5f41acdd700 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b39b802 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-HDDS-Build/1329/artifact/out/new-findbugs-hadoop-hdds_server-scm.html |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/1329/artifact/out/patch-unit-hadoop-hdds_server-scm.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/1329/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 10000) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1329/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Mapreduce example fails with Access denied for user hdfs. Superuser privilege is required
> -----------------------------------------------------------------------------------------
>
> Key: HDDS-608
> URL: https://issues.apache.org/jira/browse/HDDS-608
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Namit Maheshwari
> Priority: Major
> Attachments: HDDS-608.001.patch
>
>
> Right now only administrators can submit an MR job; all other users, including hdfs, fail with the error below:
> {code:java}
> -bash-4.2$ ./ozone sh bucket create /volume2/bucket2
> 2018-10-09 23:03:46,399 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 2018-10-09 23:03:47,473 INFO rpc.RpcClient: Creating Bucket: volume2/bucket2, with Versioning false and Storage Type set to DISK
> -bash-4.2$ /usr/hdp/current/hadoop-client/bin/hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount /tmp/mr_jobs/input/ o3://bucket2.volume2/mr_job
> 18/10/09 23:04:08 INFO conf.Configuration: Removed undeclared tags:
> 18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
> 18/10/09 23:04:10 INFO conf.Configuration: Removed undeclared tags:
> 18/10/09 23:04:10 INFO client.AHSProxy: Connecting to Application History server at ctr-e138-1518143905142-510793-01-000004.hwx.site/172.27.79.197:10200
> 18/10/09 23:04:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
> 18/10/09 23:04:10 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/hdfs/.staging/job_1539125785626_0003
> 18/10/09 23:04:11 INFO input.FileInputFormat: Total input files to process : 1
> 18/10/09 23:04:11 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
> 18/10/09 23:04:11 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 5d6248d8d690f8456469979213ab2e9993bfa2e9]
> 18/10/09 23:04:11 INFO mapreduce.JobSubmitter: number of splits:1
> 18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1539125785626_0003
> 18/10/09 23:04:12 INFO mapreduce.JobSubmitter: Executing with tokens: []
> 18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
> 18/10/09 23:04:12 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.0.3.0-63/0/resource-types.xml
> 18/10/09 23:04:12 INFO conf.Configuration: Removed undeclared tags:
> 18/10/09 23:04:12 INFO impl.YarnClientImpl: Submitted application application_1539125785626_0003
> 18/10/09 23:04:12 INFO mapreduce.Job: The url to track the job: http://ctr-e138-1518143905142-510793-01-000005.hwx.site:8088/proxy/application_1539125785626_0003/
> 18/10/09 23:04:12 INFO mapreduce.Job: Running job: job_1539125785626_0003
> 18/10/09 23:04:22 INFO mapreduce.Job: Job job_1539125785626_0003 running in uber mode : false
> 18/10/09 23:04:22 INFO mapreduce.Job: map 0% reduce 0%
> 18/10/09 23:04:30 INFO mapreduce.Job: map 100% reduce 0%
> 18/10/09 23:04:36 INFO mapreduce.Job: Task Id : attempt_1539125785626_0003_r_000000_0, Status : FAILED
> Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access denied for user hdfs. Superuser privilege is required.
> at org.apache.hadoop.hdds.scm.server.StorageContainerManager.checkAdminAccess(StorageContainerManager.java:830)
> at org.apache.hadoop.hdds.scm.server.SCMClientProtocolServer.getContainerWithPipeline(SCMClientProtocolServer.java:190)
> at org.apache.hadoop.ozone.protocolPB.StorageContainerLocationProtocolServerSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolServerSideTranslatorPB.java:128)
> at org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:12392)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
> at org.apache.hadoop.ipc.Client.call(Client.java:1443)
> at org.apache.hadoop.ipc.Client.call(Client.java:1353)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy19.getContainerWithPipeline(Unknown Source)
> at org.apache.hadoop.hdds.scm.protocolPB.StorageContainerLocationProtocolClientSideTranslatorPB.getContainerWithPipeline(StorageContainerLocationProtocolClientSideTranslatorPB.java:156)
> at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.checkKeyLocationInfo(ChunkGroupOutputStream.java:188)
> at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.allocateNewBlock(ChunkGroupOutputStream.java:476)
> at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.handleWrite(ChunkGroupOutputStream.java:271)
> at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.write(ChunkGroupOutputStream.java:250)
> at org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:47)
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
> at java.io.DataOutputStream.write(DataOutputStream.java:107)
> at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.writeObject(TextOutputFormat.java:78)
> at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat$LineRecordWriter.write(TextOutputFormat.java:93)
> at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:559)
> at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
> at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
> at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:64)
> at org.apache.hadoop.examples.WordCount$IntSumReducer.reduce(WordCount.java:52)
> at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
> at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:628)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:390)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> 18/10/09 23:04:42 INFO mapreduce.Job: Task Id : attempt_1539125785626_0003_r_000000_1, Status : FAILED
> Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access denied for user hdfs. Superuser privilege is required. [stack trace identical to attempt _0 above]
> 18/10/09 23:04:48 INFO mapreduce.Job: Task Id : attempt_1539125785626_0003_r_000000_2, Status : FAILED
> Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access denied for user hdfs. Superuser privilege is required. [stack trace identical to attempt _0 above]
> 18/10/09 23:04:55 INFO mapreduce.Job: map 100% reduce 100%
> 18/10/09 23:04:55 INFO mapreduce.Job: Job job_1539125785626_0003 failed with state FAILED due to: Task failed task_1539125785626_0003_r_000000
> Job failed as tasks failed. failedMaps:0 failedReduces:1 killedMaps:0 killedReduces: 0
> 18/10/09 23:04:55 INFO conf.Configuration: Removed undeclared tags:
> 18/10/09 23:04:55 INFO mapreduce.Job: Counters: 45
> File System Counters
> FILE: Number of bytes read=0
> FILE: Number of bytes written=266353
> FILE: Number of read operations=0
> FILE: Number of large read operations=0
> FILE: Number of write operations=0
> HDFS: Number of bytes read=215876
> HDFS: Number of bytes written=0
> HDFS: Number of read operations=2
> HDFS: Number of large read operations=0
> HDFS: Number of write operations=0
> O3: Number of bytes read=0
> O3: Number of bytes written=0
> O3: Number of read operations=0
> O3: Number of large read operations=0
> O3: Number of write operations=0
> Job Counters
> Failed reduce tasks=4
> Launched map tasks=1
> Launched reduce tasks=4
> Rack-local map tasks=1
> Total time spent by all maps in occupied slots (ms)=21156
> Total time spent by all reduces in occupied slots (ms)=117416
> Total time spent by all map tasks (ms)=5289
> Total time spent by all reduce tasks (ms)=14677
> Total vcore-milliseconds taken by all map tasks=5289
> Total vcore-milliseconds taken by all reduce tasks=14677
> Total megabyte-milliseconds taken by all map tasks=21663744
> Total megabyte-milliseconds taken by all reduce tasks=120233984
> Map-Reduce Framework
> Map input records=716
> Map output records=32019
> Map output bytes=343475
> Map output materialized bytes=6332
> Input split bytes=121
> Combine input records=32019
> Combine output records=461
> Spilled Records=461
> Failed Shuffles=0
> Merged Map outputs=0
> GC time elapsed (ms)=130
> CPU time spent (ms)=3320
> Physical memory (bytes) snapshot=2524454912
> Virtual memory (bytes) snapshot=5398167552
> Total committed heap usage (bytes)=2697461760
> Peak Map Physical memory (bytes)=2524454912
> Peak Map Virtual memory (bytes)=5398167552
> File Input Format Counters
> Bytes Read=215755
> 18/10/09 23:04:55 INFO conf.Configuration: Removed undeclared tags:
> {code}
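The failure above originates in the admin gate that SCM applies to the getContainerWithPipeline RPC: StorageContainerManager.checkAdminAccess rejects any caller that is not an SCM administrator, and every write through the Ozone FileSystem client reaches that RPC when it allocates a block. A minimal sketch of that kind of gate, assuming the admin-set field name (this is not the verbatim Hadoop code):

{code:java}
// Sketch of a superuser gate like the one throwing above. The scmAdminUsernames
// collection and how it is populated from configuration are assumptions.
private void checkAdminAccess(String remoteUser) throws IOException {
  if (remoteUser != null && !scmAdminUsernames.contains(remoteUser)) {
    throw new IOException("Access denied for user " + remoteUser
        + ". Superuser privilege is required.");
  }
}
{code}

Consistent with the dead-store warning in the QA report above, the attached patch appears to stop passing the remote user into this check on the getContainerWithPipeline path, so that ordinary users such as hdfs can complete the job.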