[ https://issues.apache.org/jira/browse/HDDS-1293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16800556#comment-16800556 ]
Hadoop QA commented on HDDS-1293:
---------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 5s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 8m 41s | trunk passed |
| +1 | compile | 3m 25s | trunk passed |
| +1 | checkstyle | 0m 54s | trunk passed |
| +1 | mvnsite | 0m 0s | trunk passed |
| +1 | shadedclient | 15m 8s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 6m 12s | trunk passed |
| +1 | javadoc | 2m 10s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 6m 18s | the patch passed |
| +1 | compile | 3m 5s | the patch passed |
| +1 | javac | 3m 5s | the patch passed |
| +1 | checkstyle | 0m 55s | the patch passed |
| +1 | mvnsite | 0m 0s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 49s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 6m 14s | the patch passed |
| +1 | javadoc | 1m 59s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 4m 6s | hadoop-hdds in the patch passed. |
| -1 | unit | 15m 6s | hadoop-ozone in the patch failed. |
| +1 | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 88m 42s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/PreCommit-HDDS-Build/2575/artifact/out/Dockerfile |
| JIRA Issue | HDDS-1293 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12963610/HDDS-1293.000.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 1faa370ee62b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 128dd91 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDDS-Build/2575/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/2575/testReport/ |
| Max. process+thread count | 4997 (vs. ulimit of 10000) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/2575/console |
| Powered by | Apache Yetus 0.10.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException
> -------------------------------------------------------------
>
>                 Key: HDDS-1293
>                 URL: https://issues.apache.org/jira/browse/HDDS-1293
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>    Affects Versions: 0.4.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Shashikant Banerjee
>            Priority: Major
>         Attachments: HDDS-1293.000.patch
>
> ExcludeList#getProtoBuf throws ArrayIndexOutOfBoundsException because getProtoBuf uses parallelStreams
> {code}
> 2019-03-17 16:24:35,774 INFO retry.RetryInvocationHandler (RetryInvocationHandler.java:log(411)) - com.google.protobuf.ServiceException: org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 3
>     at java.util.ArrayList.add(ArrayList.java:463)
>     at org.apache.hadoop.hdds.protocol.proto.HddsProtos$ExcludeListProto$Builder.addContainerIds(HddsProtos.java:12904)
>     at org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.lambda$getProtoBuf$3(ExcludeList.java:89)
>     at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
>     at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>     at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
>     at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
>     at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
>     at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>     at java.util.concurrent.ForkJoinPool.helpComplete(ForkJoinPool.java:1870)
>     at java.util.concurrent.ForkJoinPool.externalHelpComplete(ForkJoinPool.java:2467)
>     at java.util.concurrent.ForkJoinTask.externalAwaitDone(ForkJoinTask.java:324)
>     at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:405)
>     at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
>     at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>     at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>     at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>     at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
>     at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
>     at org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList.getProtoBuf(ExcludeList.java:89)
>     at org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.allocateBlock(ScmBlockLocationProtocolClientSideTranslatorPB.java:100)
>     at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
>     at com.sun.proxy.$Proxy22.allocateBlock(Unknown Source)
>     at org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:275)
>     at org.apache.hadoop.ozone.om.KeyManagerImpl.allocateBlock(KeyManagerImpl.java:246)
>     at org.apache.hadoop.ozone.om.OzoneManager.allocateBlock(OzoneManager.java:2023)
>     at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.allocateBlock(OzoneManagerRequestHandler.java:631)
>     at org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handle(OzoneManagerRequestHandler.java:231)
>     at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.java:131)
>     at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:86)
>     at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> , while invoking $Proxy28.submitRequest over null(localhost:59024). Trying to failover immediately.
> 2019-03-17 16:24:35,783 INFO om.KeyManagerImpl (KeyManagerImpl.java:allocateBlock(271)) - allocate block key:pool-9-thread-7-1581351327 exclude:datanodes:containers:#6#1#9#5pipelines:
> {code}
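For readers unfamiliar with the failure mode: the trace shows HddsProtos$ExcludeListProto$Builder.addContainerIds appending to a plain java.util.ArrayList, and ExcludeList.getProtoBuf driving it from a parallel stream, so several ForkJoinPool worker threads race on ArrayList.add(). Below is a minimal, self-contained sketch of that race against a bare ArrayList; it is illustrative only (the class name ParallelStreamRace and the element counts are mine, not from the patch), and the sequential variant at the end is the shape of the obvious fix.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.stream.LongStream;

public class ParallelStreamRace {
  public static void main(String[] args) {
    // Racy: parallel() fans forEach out across ForkJoinPool worker
    // threads, but ArrayList.add() is not thread-safe. Runs typically
    // lose elements or throw ArrayIndexOutOfBoundsException from
    // ArrayList.add() -- the same frame as in the HDDS-1293 trace.
    List<Long> racy = new ArrayList<>();
    try {
      LongStream.range(0, 1_000_000).parallel().forEach(racy::add);
      System.out.println("racy size = " + racy.size()); // rarely 1000000
    } catch (RuntimeException e) { // typically ArrayIndexOutOfBoundsException
      System.out.println("race detected: " + e);
    }

    // Safe: a sequential stream performs every add() on the calling
    // thread. Building a protobuf message is cheap bookkeeping, not
    // CPU-bound work, so getProtoBuf() loses nothing by staying sequential.
    List<Long> safe = new ArrayList<>();
    LongStream.range(0, 1_000_000).forEach(safe::add);
    System.out.println("safe size = " + safe.size()); // always 1000000
  }
}
{code}

If the attached patch takes the usual route, it simply drops the parallel() hop. Collecting with Collectors.toList() would be the other thread-safe option, since a parallel collect gives each worker its own container and merges them; what is never safe is letting a parallel stream's forEach mutate shared state such as a protobuf builder.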