Re: Is a VOTE necessary to create a child repo?
Moving the thread to the dev lists. Thanks
+Vinod

> On Sep 23, 2019, at 11:43 PM, Vinayakumar B wrote:
>
> Thanks Marton,
>
> The newly created 'hadoop-thirdparty' repo is empty right now.
> Whether to use that repo for the shaded artifact or not will be tracked in
> the HADOOP-13363 umbrella jira. Please feel free to join the discussion.
>
> No existing codebase is being moved out of the hadoop repo, so I think
> we are good to go for now.
>
> -Vinay
>
> On Mon, Sep 23, 2019 at 11:38 PM Marton Elek wrote:
>
>> I am not sure it's defined when a vote is required.
>>
>> https://www.apache.org/foundation/voting.html
>>
>> Personally I think it's a big enough change to send a notification to the
>> dev lists with a 'lazy consensus' closure.
>>
>> Marton
>>
>> On 2019/09/23 17:46:37, Vinayakumar B wrote:
>>> Hi,
>>>
>>> As discussed in HADOOP-13363, the protobuf 3.x jar (and maybe more in
>>> future) will be kept as a shaded artifact in a separate repo, which will
>>> be referred to as a dependency in hadoop modules. This approach avoids
>>> shading every submodule during the build.
>>>
>>> So the question is: is any VOTE required before asking to create a git repo?
>>>
>>> On the self-serve platform https://gitbox.apache.org/setup/newrepo.html
>>> I can see that the requester should be a PMC member.
>>>
>>> Wanted to confirm here first.
>>>
>>> -Vinay
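For readers unfamiliar with the approach under discussion: with a pre-shaded thirdparty artifact, Hadoop modules would import protobuf classes from a relocated package instead of running the shade plugin in every submodule. Below is a minimal sketch of what consuming code could look like; the relocated package name is an assumption for illustration, since the empty hadoop-thirdparty repo does not define anything yet.

{code:java}
// Sketch only: the relocated package name is an assumption; the actual
// relocation is to be decided under HADOOP-13363.
import org.apache.hadoop.thirdparty.protobuf.ByteString;

public class ShadedProtobufSketch {
  public static void main(String[] args) {
    // Relocated protobuf 3.x classes can coexist with an unshaded
    // protobuf 2.5 on the classpath, so downstream users see no conflict.
    ByteString payload = ByteString.copyFromUtf8("hello");
    System.out.println(payload.size());
  }
}
{code}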
[jira] [Created] (HDDS-2195) Apply spotbugs check to test code
Attila Doroszlai created HDDS-2195:
--------------------------------------

             Summary: Apply spotbugs check to test code
                 Key: HDDS-2195
                 URL: https://issues.apache.org/jira/browse/HDDS-2195
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: test
            Reporter: Attila Doroszlai
            Assignee: Attila Doroszlai

The goal of this task is to [enable Spotbugs to run on test code|https://spotbugs.github.io/spotbugs-maven-plugin/spotbugs-mojo.html#includeTests], and fix all issues it reports (both to improve the code and to avoid breaking CI).
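To illustrate why running Spotbugs on test sources pays off, here is a hypothetical example of a pattern it reports (RV_RETURN_VALUE_IGNORED_BAD_PRACTICE), common in test cleanup code; the class is invented for illustration.

{code:java}
import java.io.File;

// Hypothetical test helper, invented for illustration.
public class CleanupExample {
  void cleanupBad(File dir) {
    dir.delete(); // Spotbugs: return value ignored, so a failed delete goes unnoticed
  }

  void cleanupGood(File dir) {
    if (!dir.delete()) {
      throw new IllegalStateException("could not delete " + dir);
    }
  }
}
{code}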
[DISCUSS] Release docs pointers on the Hadoop site
Hi Folks,

At present,
http://hadoop.apache.org/docs/stable/ points to *Apache Hadoop 3.2.1*
http://hadoop.apache.org/docs/current/ points to *Apache Hadoop 3.2.1*
http://hadoop.apache.org/docs/stable2/ points to *Apache Hadoop 2.9.2*
http://hadoop.apache.org/docs/current2/ points to *Apache Hadoop 2.9.2*

3.2.1 was released yesterday. *Now 3.1.3 has completed voting* and is in the final stages of staging.

As I understand it,
a) http://hadoop.apache.org/docs/stable/ should keep pointing to 3.2.1?
b) http://hadoop.apache.org/docs/current/ should point to 3.1.3?

Now my questions:
1. If the release manager of the 3.1 line considers 3.1.3 stable, and the 3.2 line is also in a stable state, which release should take precedence to be called *stable* within a major line (2.x or 3.x)? Or do we need a vote or discuss thread to decide which release is called stable per release line?
2. Given 3.2.1 is released and *stable* points to 3.2.1, when 3.1.3 is released now, should http://hadoop.apache.org/docs/current/ be updated to 3.1.3? Is that the norm?

Thanks
Sunil
[jira] [Resolved] (HDDS-2193) Adding container related metrics in SCM
[ https://issues.apache.org/jira/browse/HDDS-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer resolved HDDS-2193.
--------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> Adding container related metrics in SCM
> ---------------------------------------
>
>                 Key: HDDS-2193
>                 URL: https://issues.apache.org/jira/browse/HDDS-2193
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: SCM
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
> Following metrics will be added as part of this jira:
> * Number of successful create container calls
> * Number of failed create container calls
> * Number of successful delete container calls
> * Number of failed delete container calls
> * Number of list container ops.
[jira] [Created] (HDDS-2194) Replication of Container fails with "Only closed containers could be exported"
Mukul Kumar Singh created HDDS-2194:
---------------------------------------

             Summary: Replication of Container fails with "Only closed containers could be exported"
                 Key: HDDS-2194
                 URL: https://issues.apache.org/jira/browse/HDDS-2194
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Datanode
    Affects Versions: 0.5.0
            Reporter: Mukul Kumar Singh

Replication of Container fails with "Only closed containers could be exported"

cc: [~nanda]

{code}
2019-09-26 15:00:17,640 [grpc-default-executor-13] INFO  replication.GrpcReplicationService (GrpcReplicationService.java:download(57)) - Streaming container data (37) to other datanode
Sep 26, 2019 3:00:17 PM org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor run
SEVERE: Exception while executing runnable org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed@70e641f2
java.lang.IllegalStateException: Only closed containers could be exported: ContainerId=37
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.exportContainerData(KeyValueContainer.java:527)
	at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.exportContainer(KeyValueHandler.java:875)
	at org.apache.hadoop.ozone.container.ozoneimpl.ContainerController.exportContainer(ContainerController.java:134)
	at org.apache.hadoop.ozone.container.replication.OnDemandContainerReplicationSource.copyData(OnDemandContainerReplicationSource.java:64)
	at org.apache.hadoop.ozone.container.replication.GrpcReplicationService.download(GrpcReplicationService.java:63)
	at org.apache.hadoop.hdds.protocol.datanode.proto.IntraDatanodeProtocolServiceGrpc$MethodHandlers.invoke(IntraDatanodeProtocolServiceGrpc.java:217)
	at org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:171)
	at org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:283)
	at org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:710)
	at org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

2019-09-26 15:00:17,644 [grpc-default-executor-17] ERROR replication.GrpcReplicationClient (GrpcReplicationClient.java:onError(142)) - Container download was unsuccessfull
org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNKNOWN
	at org.apache.ratis.thirdparty.io.grpc.Status.asRuntimeException(Status.java:526)
	at org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:434)
	at org.apache.ratis.thirdparty.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
	at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at org.apache.ratis.thirdparty.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:678)
	at org.apache.ratis.thirdparty.io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
	at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at org.apache.ratis.thirdparty.io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at org.apache.ratis.thirdparty.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397)
	at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
	at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
	at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
	at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
	at org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
	at org.apache.ratis.thirdparty.io.grpc.i
{code}
[jira] [Created] (HDDS-2193) Adding container related metrics in SCM
Bharat Viswanadham created HDDS-2193:
----------------------------------------

             Summary: Adding container related metrics in SCM
                 Key: HDDS-2193
                 URL: https://issues.apache.org/jira/browse/HDDS-2193
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: SCM
            Reporter: Bharat Viswanadham
            Assignee: Supratim Deka

This jira aims to add more container related metrics to SCM.
Following metrics will be added as part of this jira:
* Number of containers
* Number of open containers
* Number of closed containers
* Number of quasi closed containers
* Number of closing containers
* Number of successful create container calls
* Number of failed create container calls
* Number of successful delete container calls
* Number of failed delete container calls
* Number of successful container report processing
* Number of failed container report processing
* Number of successful incremental container report processing
* Number of failed incremental container report processing
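As a rough illustration, counters like these are typically implemented with the Hadoop metrics2 library; the class and field names below are hypothetical, not necessarily what the eventual patch will use.

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical sketch using the Hadoop metrics2 annotations.
@Metrics(about = "SCM container manager metrics", context = "dfs")
public class SCMContainerMetricsSketch {
  @Metric private MutableCounterLong numSuccessfulCreateContainers;
  @Metric private MutableCounterLong numFailedCreateContainers;

  public static SCMContainerMetricsSketch create() {
    // Register the annotated source so the counters are published.
    return DefaultMetricsSystem.instance().register(
        "SCMContainerMetricsSketch", "SCM container metrics",
        new SCMContainerMetricsSketch());
  }

  public void incNumSuccessfulCreateContainers() {
    numSuccessfulCreateContainers.incr();
  }

  public void incNumFailedCreateContainers() {
    numFailedCreateContainers.incr();
  }
}
{code}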
[jira] [Created] (HDDS-2192) Optimize Ozone CLI commands to send one ACL request to authorizers instead of sending multiple requests
Vivek Ratnavel Subramanian created HDDS-2192:
------------------------------------------------

             Summary: Optimize Ozone CLI commands to send one ACL request to authorizers instead of sending multiple requests
                 Key: HDDS-2192
                 URL: https://issues.apache.org/jira/browse/HDDS-2192
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone CLI
    Affects Versions: 0.5.0
            Reporter: Vivek Ratnavel Subramanian
            Assignee: Vivek Ratnavel Subramanian

Currently, when trying to read a key, three requests are sent to the authorizer: volume read, bucket read, key read. It should instead be just one request to the authorizer.
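To make the proposed optimization concrete, a self-contained sketch of the idea follows; the interface is invented for illustration and deliberately simpler than Ozone's real IAccessAuthorizer API.

{code:java}
import java.util.Arrays;
import java.util.List;

// Invented interface for illustration; Ozone's real authorizer API differs.
interface Authorizer {
  boolean checkAccess(List<String> objectPaths, String user, String aclType);
}

public class BatchedAclCheckSketch {
  // Before: volume read + bucket read + key read = three authorizer round trips.
  // After: one request that names all three objects the key read touches.
  static boolean canReadKey(Authorizer authorizer, String user) {
    return authorizer.checkAccess(
        Arrays.asList("/vol1", "/vol1/bucket1", "/vol1/bucket1/key1"),
        user, "READ");
  }
}
{code}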
[jira] [Created] (HDDS-2191) Handle bucket create request in OzoneNativeAuthorizer
Vivek Ratnavel Subramanian created HDDS-2191:
------------------------------------------------

             Summary: Handle bucket create request in OzoneNativeAuthorizer
                 Key: HDDS-2191
                 URL: https://issues.apache.org/jira/browse/HDDS-2191
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Security
    Affects Versions: 0.5.0
            Reporter: Vivek Ratnavel Subramanian
            Assignee: Vivek Ratnavel Subramanian

OzoneNativeAuthorizer should handle bucket create request when the bucket object is not yet created.
[jira] [Created] (HDDS-2190) Ozone administrators should be able to list all the volumes
Vivek Ratnavel Subramanian created HDDS-2190:
------------------------------------------------

             Summary: Ozone administrators should be able to list all the volumes
                 Key: HDDS-2190
                 URL: https://issues.apache.org/jira/browse/HDDS-2190
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Manager
    Affects Versions: 0.4.1
            Reporter: Vivek Ratnavel Subramanian
            Assignee: Vivek Ratnavel Subramanian

Currently, ozone administrators are not able to list all the volumes in the system. `ozone sh volume ls` only lists the volumes owned by the admin user.
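A tiny sketch of the expected behaviour (types and method invented for illustration): an admin listing returns every volume, while a plain user only gets volumes they own, which is what everyone, including admins, currently gets.

{code:java}
import java.util.List;
import java.util.stream.Collectors;

public class VolumeListingSketch {
  // Invented type for illustration.
  static class Volume {
    final String name;
    final String owner;
    Volume(String name, String owner) { this.name = name; this.owner = owner; }
  }

  // Hypothetical sketch of the intended fix: admins see all volumes,
  // non-admins only the volumes they own.
  static List<Volume> listVolumes(String user, boolean isAdmin, List<Volume> all) {
    return isAdmin
        ? all
        : all.stream().filter(v -> v.owner.equals(user)).collect(Collectors.toList());
  }
}
{code}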
[jira] [Created] (HDFS-14877) Remove unused imports from TestClose.java
Lisheng Sun created HDFS-14877:
----------------------------------

             Summary: Remove unused imports from TestClose.java
                 Key: HDFS-14877
                 URL: https://issues.apache.org/jira/browse/HDFS-14877
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Lisheng Sun
            Assignee: Lisheng Sun

There is 1 unused import in TestClose.java. Let's clean it up.
[jira] [Created] (HDFS-14876) Remove unused imports from TestBlockMissingException.java
Lisheng Sun created HDFS-14876:
----------------------------------

             Summary: Remove unused imports from TestBlockMissingException.java
                 Key: HDFS-14876
                 URL: https://issues.apache.org/jira/browse/HDFS-14876
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Lisheng Sun
            Assignee: Lisheng Sun

There are 2 unused imports in TestBlockMissingException.java. Let's clean them up.
[jira] [Resolved] (HDDS-2180) Add Object ID and update ID on VolumeList Object
[ https://issues.apache.org/jira/browse/HDDS-2180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer resolved HDDS-2180.
--------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> Add Object ID and update ID on VolumeList Object
> ------------------------------------------------
>
>                 Key: HDDS-2180
>                 URL: https://issues.apache.org/jira/browse/HDDS-2180
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Anu Engineer
>            Assignee: Anu Engineer
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This JIRA proposes to add Object ID and Update IDs to the Volume List Object.
[jira] [Created] (HDDS-2189) Datanode should send PipelineAction on RaftServer failure
Lokesh Jain created HDDS-2189:
---------------------------------

             Summary: Datanode should send PipelineAction on RaftServer failure
                 Key: HDDS-2189
                 URL: https://issues.apache.org/jira/browse/HDDS-2189
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Datanode
            Reporter: Lokesh Jain

{code:java}
2019-09-26 08:03:07,152 ERROR org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker: 664c4e90-08f3-46c9-a073-c93ef2a55da3@group-93F633896F08-SegmentedRaftLogWorker hit exception
java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:694)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
	at org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.<init>(BufferedWriteChannel.java:41)
	at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.<init>(SegmentedRaftLogOutputStream.java:72)
	at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
	at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
	at java.lang.Thread.run(Thread.java:748)
2019-09-26 08:03:07,155 INFO org.apache.ratis.server.impl.RaftServerImpl: 664c4e90-08f3-46c9-a073-c93ef2a55da3@group-93F633896F08: shutdown
{code}

On RaftServer shutdown the datanode should send a PipelineAction denoting that the pipeline has been closed exceptionally in the datanode.
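Sketched below, with invented types, is the shape of the proposed handling: when the Raft server dies, the datanode queues a close action for SCM rather than waiting for a pipeline timeout. The real code would use the PipelineAction protobuf reported via the datanode heartbeat; everything here is illustrative.

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// All type names invented for illustration.
public class PipelineActionSketch {
  enum Action { CLOSE }

  static class PipelineAction {
    final String pipelineId;
    final Action action;
    final String detail;
    PipelineAction(String pipelineId, Action action, String detail) {
      this.pipelineId = pipelineId; this.action = action; this.detail = detail;
    }
  }

  private final Queue<PipelineAction> pendingActions = new ConcurrentLinkedQueue<>();

  // Invoked from the RaftServer failure callback (e.g. on the OOM above).
  void onRaftServerFailure(String pipelineId, Throwable t) {
    pendingActions.add(new PipelineAction(pipelineId, Action.CLOSE,
        "RaftServer failed: " + t.getMessage()));
    // A heartbeat thread would drain pendingActions and ship them to SCM.
  }
}
{code}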
[jira] [Created] (HDDS-2188) Implement LocatedFileStatus & getFileBlockLocations to provide node/localization information to Yarn/Mapreduce
Mukul Kumar Singh created HDDS-2188:
---------------------------------------

             Summary: Implement LocatedFileStatus & getFileBlockLocations to provide node/localization information to Yarn/Mapreduce
                 Key: HDDS-2188
                 URL: https://issues.apache.org/jira/browse/HDDS-2188
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Filesystem
    Affects Versions: 0.5.0
            Reporter: Mukul Kumar Singh
            Assignee: Mukul Kumar Singh

For applications like Hive/MapReduce to take advantage of the data locality in Ozone, Ozone should return the location of the Ozone blocks. This is needed for better read performance for Hadoop Applications.

{code}
    if (file instanceof LocatedFileStatus) {
      blkLocations = ((LocatedFileStatus) file).getBlockLocations();
    } else {
      blkLocations = fs.getFileBlockLocations(file, 0, length);
    }
{code}
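For context, a minimal sketch of the FileSystem side of this; the host values are placeholders, since the real implementation must map Ozone block and pipeline metadata to actual datanode addresses.

{code:java}
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;

public class BlockLocationSketch {
  // Shape of an override of FileSystem#getFileBlockLocations; the single
  // location spanning the whole file is a placeholder, not real topology.
  static BlockLocation[] getFileBlockLocations(FileStatus file, long start, long len) {
    String[] names = {"datanode1.example.com:9866"}; // host:port pairs, assumed
    String[] hosts = {"datanode1.example.com"};      // hostnames, assumed
    return new BlockLocation[] {
        new BlockLocation(names, hosts, 0, file.getLen())
    };
  }
}
{code}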
[jira] [Created] (HDDS-2187) ozone-mr test fails with No FileSystem for scheme "o3fs"
Attila Doroszlai created HDDS-2187:
--------------------------------------

             Summary: ozone-mr test fails with No FileSystem for scheme "o3fs"
                 Key: HDDS-2187
                 URL: https://issues.apache.org/jira/browse/HDDS-2187
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: test
            Reporter: Attila Doroszlai

HDDS-2101 changed how the Ozone filesystem provider is configured. {{ozone-mr}} tests [started failing|https://github.com/elek/ozone-ci/blob/2f2c99652af6b26a95f08eece9e545f0d72ccf45/pr/pr-hdds-2101-rtz55/acceptance/output.log#L255-L263], but it [wasn't noticed|https://github.com/elek/ozone-ci/blob/master/pr/pr-hdds-2101-rtz55/acceptance/result] due to HDDS-2185.

{code}
Running command 'ozone fs -mkdir /user'
${output} = mkdir: No FileSystem for scheme "o3fs"
{code}
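For reference, the classic way to pin a filesystem scheme to its implementation when provider auto-discovery is not wired up is the fs.&lt;scheme&gt;.impl key; whether that is the right fix here depends on how HDDS-2101 intended the provider to be configured. A hedged sketch:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class O3fsConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Explicit scheme-to-class mapping; bucket/volume names are placeholders.
    conf.set("fs.o3fs.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
    FileSystem fs = FileSystem.get(URI.create("o3fs://bucket.volume/"), conf);
    fs.mkdirs(new Path("/user"));
  }
}
{code}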
[jira] [Created] (HDDS-2186) Fix tests using MiniOzoneCluster for its memory related exceptions
Li Cheng created HDDS-2186:
------------------------------

             Summary: Fix tests using MiniOzoneCluster for its memory related exceptions
                 Key: HDDS-2186
                 URL: https://issues.apache.org/jira/browse/HDDS-2186
             Project: Hadoop Distributed Data Store
          Issue Type: Sub-task
            Reporter: Li Cheng

After multi-raft usage, MiniOzoneCluster seems to be fishy and reports a bunch of 'out of memory' exceptions in ratis. Attached sample stacks:

{code}
2019-09-26 15:12:22,824 [2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker] ERROR segmented.SegmentedRaftLogWorker (SegmentedRaftLogWorker.java:run(323)) - 2e1e11ca-833a-4fbc-b948-3d93fc8e7288@group-218F3868CEA9-SegmentedRaftLogWorker hit exception
java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:694)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
	at org.apache.ratis.server.raftlog.segmented.BufferedWriteChannel.<init>(BufferedWriteChannel.java:41)
	at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogOutputStream.<init>(SegmentedRaftLogOutputStream.java:72)
	at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker$StartLogSegment.execute(SegmentedRaftLogWorker.java:566)
	at org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogWorker.run(SegmentedRaftLogWorker.java:289)
	at java.lang.Thread.run(Thread.java:748)
{code}

which leads to:

{code}
2019-09-26 15:12:23,029 [RATISCREATEPIPELINE1] ERROR pipeline.RatisPipelineProvider (RatisPipelineProvider.java:lambda$null$2(181)) - Failed invoke Ratis rpc org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider$$Lambda$297/1222454951@55d1e990 for c1f4d375-683b-42fe-983b-428a63aa8803
org.apache.ratis.protocol.TimeoutIOException: deadline exceeded after 2999881264ns
	at org.apache.ratis.grpc.GrpcUtil.tryUnwrapException(GrpcUtil.java:82)
	at org.apache.ratis.grpc.GrpcUtil.unwrapException(GrpcUtil.java:75)
	at org.apache.ratis.grpc.client.GrpcClientProtocolClient.blockingCall(GrpcClientProtocolClient.java:178)
	at org.apache.ratis.grpc.client.GrpcClientProtocolClient.groupAdd(GrpcClientProtocolClient.java:147)
	at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:94)
	at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:278)
	at org.apache.ratis.client.impl.RaftClientImpl.groupAdd(RaftClientImpl.java:205)
	at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$initializePipeline$1(RatisPipelineProvider.java:142)
	at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$null$2(RatisPipelineProvider.java:177)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
	at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
	at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:583)
	at org.apache.hadoop.hdds.scm.pipeline.RatisPipelineProvider.lambda$callRatisRpc$3(RatisPipelineProvider.java:171)
	at java.util.concurrent.ForkJoinTask$AdaptedRunnableAction.exec(ForkJoinTask.java:1386)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 2999881264ns
	at org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls.toStatusRuntimeExce
{code}
[jira] [Created] (HDDS-2185) createmrenv failure not reflected in acceptance test result
Attila Doroszlai created HDDS-2185:
--------------------------------------

             Summary: createmrenv failure not reflected in acceptance test result
                 Key: HDDS-2185
                 URL: https://issues.apache.org/jira/browse/HDDS-2185
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: test
            Reporter: Attila Doroszlai
            Assignee: Attila Doroszlai

Part of the MR tests fail, but it's not reflected in the test report, which shows all green.

{noformat:title=https://github.com/elek/ozone-ci/blob/679228c146628cd4d1a416e1ffc9c513d19fb43d/pr/pr-hdds-2179-9bnxk/acceptance/output.log#L718-L730}
==============================================================================
hadoop31-createmrenv :: Create directories required for MR test
==============================================================================
Create test volume, bucket and key                                    | PASS |
------------------------------------------------------------------------------
Create user dir for hadoop                                            | FAIL |
1 != 0
------------------------------------------------------------------------------
hadoop31-createmrenv :: Create directories required for MR test      | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
==============================================================================
Output:  /tmp/smoketest/hadoop31/result/robot-hadoop31-hadoop31-createmrenv-scm.xml
{noformat}
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/

[Sep 26, 2019 12:50:05 AM] (jhung) Addendum to YARN-9730. Support forcing configured partitions to be

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
        hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
        Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests:
        hadoop.util.TestReadWriteDiskValidator
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.TestMultipleNNPortQOP
        hadoop.registry.secure.TestSecureLogins
        hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
        hadoop.yarn.client.api.impl.TestAMRMClient

    cc:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
    cc:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-checkstyle-root.txt [16M]
    hadolint:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-patch-hadolint.txt [4.0K]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-patch-shellcheck.txt [72K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-patch-shelldocs.txt [8.0K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/whitespace-eol.txt [12M]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/whitespace-tabs.txt [1.3M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/xml.txt [12K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
    javadoc:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M]
    unit:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/456/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.
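For readers triaging the FindBugs item above: "boxed value is unboxed and then immediately reboxed" refers to a pattern like the following (simplified, invented code, not the actual ColumnRWHelper source).

{code:java}
import java.util.Map;

public class ReboxingExample {
  // The explicit (long) cast unboxes `value`, and passing the result to a
  // Map<String, Object> immediately reboxes it -> flagged by FindBugs.
  static void putBad(Map<String, Object> results, String key, Long value) {
    results.put(key, (long) value);
  }

  // Keep the boxed Long as-is: no redundant unbox/rebox round trip.
  static void putGood(Map<String, Object> results, String key, Long value) {
    results.put(key, value);
  }
}
{code}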
[jira] [Created] (HDDS-2184) Rename ozone scmcli to ozone admin
Marton Elek created HDDS-2184:
---------------------------------

             Summary: Rename ozone scmcli to ozone admin
                 Key: HDDS-2184
                 URL: https://issues.apache.org/jira/browse/HDDS-2184
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Marton Elek

Originally, ozone scmcli was designed to be used only by developers. A very cryptic name was chosen intentionally to frighten away beginner users.

As we recently realized, we have started to use "ozone scmcli" as a generic admin tool. More and more tools have been added which are useful not only for developers but also for administrators.

Therefore I suggest renaming "ozone scmcli" to something more meaningful, for example "ozone admin".
[jira] [Created] (HDDS-2183) Container and pipeline subcommands of scmcli should be grouped
Marton Elek created HDDS-2183:
---------------------------------

             Summary: Container and pipeline subcommands of scmcli should be grouped
                 Key: HDDS-2183
                 URL: https://issues.apache.org/jira/browse/HDDS-2183
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Marton Elek

Once upon a time we had only a few subcommands under `ozone scmcli`, all of them for managing containers. Now we have many admin commands; some of them are grouped into a subcommand (e.g. safemode, replicationmanager) and some are not.

I propose to group the container and pipeline related commands:

Instead of "ozone scmcli info" use "ozone scmcli container info"
Instead of "ozone scmcli list" use "ozone scmcli container list"
Instead of "ozone scmcli listPipelines" use "ozone scmcli pipeline list"

And so on... (a sketch of the grouping follows below)
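A rough sketch of the proposed grouping using picocli, which the Ozone CLI is built on; the subcommand class names are invented for illustration.

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;

// Invented class names; sketch of grouping "info" and "list" under "container".
@Command(name = "container",
    description = "Container specific operations",
    subcommands = {InfoSubcommand.class, ListSubcommand.class})
class ContainerCommands implements Runnable {
  public void run() {
    // Without a subcommand, print usage for the group.
    CommandLine.usage(this, System.out);
  }
}

@Command(name = "info", description = "Show information about one container")
class InfoSubcommand implements Runnable {
  public void run() { /* look up and print container details */ }
}

@Command(name = "list", description = "List containers")
class ListSubcommand implements Runnable {
  public void run() { /* print container list */ }
}
{code}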