[GitHub] [hadoop] hadoop-yetus commented on issue #1045: HDDS-1741 Fix prometheus configuration in ozoneperf example cluster

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1045: HDDS-1741 Fix prometheus configuration 
in ozoneperf example cluster
URL: https://github.com/apache/hadoop/pull/1045#issuecomment-507960874
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 97 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 552 | trunk passed |
   | +1 | compile | 265 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | shadedclient | 1672 | branch has errors when building and testing our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 267 | the patch passed |
   | +1 | javac | 267 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 712 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 360 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2343 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6501 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   |   | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1045 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient yamllint |
   | uname | Linux 43c4a38a77ea 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 15d82fc |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/3/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/3/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/3/testReport/ |
   | Max. process+thread count | 5058 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/3/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1049: HDDS-1758. Add replication and key deletion tests to MiniOzoneChaosCluster. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1049: HDDS-1758. Add replication and key 
deletion tests to MiniOzoneChaosCluster. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1049#issuecomment-507951870
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 51 | Maven dependency ordering for branch |
   | +1 | mvninstall | 485 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 888 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 313 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 440 | the patch passed |
   | +1 | compile | 269 | the patch passed |
   | +1 | javac | 269 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 650 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 519 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 241 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1247 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 6295 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1049/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1049 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cc9804cd8014 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 15d82fc |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1049/1/artifact/out/patch-unit-hadoop-ozone.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1049/1/testReport/ |
   | Max. process+thread count | 5070 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1049/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] mukul1987 closed pull request #829: HDDS-1550. MiniOzoneCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
mukul1987 closed pull request #829: HDDS-1550. MiniOzoneCluster is not shutting 
down all the threads during shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/829
 
 
   





[GitHub] [hadoop] mukul1987 opened a new pull request #1050: HDDS-1550. MiniOzoneCluster is not shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
mukul1987 opened a new pull request #1050: HDDS-1550. MiniOzoneCluster is not 
shutting down all the threads during shutdown. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1050
 
 
   Creating a new pull request. 
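   For context, the JIRA title points at stray threads keeping the JVM alive after MiniOzoneCluster shutdown. As a point of reference only (not the actual patch), the usual fix pattern for this class of leak looks like the sketch below; the helper name `shutdownExecutor` is hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class ExecutorShutdownUtil {

  private ExecutorShutdownUtil() {
  }

  // Hypothetical helper: stop accepting new tasks, give in-flight work a
  // short grace period, then interrupt whatever is still running so pool
  // threads cannot outlive the cluster shutdown.
  public static void shutdownExecutor(ExecutorService executor) {
    executor.shutdown();
    try {
      if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
        executor.shutdownNow();
      }
    } catch (InterruptedException e) {
      executor.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }
}
```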





[GitHub] [hadoop] nandakumar131 commented on issue #1004: HDDS-1718 : Increase Ratis Leader election timeout default to 10 seconds

2019-07-02 Thread GitBox
nandakumar131 commented on issue #1004: HDDS-1718 : Increase Ratis Leader 
election timeout default to 10 seconds
URL: https://github.com/apache/hadoop/pull/1004#issuecomment-507942072
 
 
   @avijayanhwx `TestCloseContainerCommandHandler` failure seems related to 
this change.
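   For context, HDDS-1718 raises the default leader election timeout that Ozone passes to Ratis. A minimal sketch of overriding it in code, assuming the configuration key name (verify against ScmConfigKeys in your source tree):

```java
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class LeaderElectionTimeoutExample {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Raise the minimum Ratis leader election timeout to 10 seconds,
    // matching the new default proposed in HDDS-1718. The key name below
    // is an assumption, not confirmed from this thread.
    conf.setTimeDuration(
        "dds.ratis.leader.election.minimum.timeout.duration",
        10, TimeUnit.SECONDS);
  }
}
```

   A longer election timeout also stretches how long a test waits for a pipeline leader, which is one plausible way a test such as TestCloseContainerCommandHandler could start timing out after this change.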





[GitHub] [hadoop] hadoop-yetus commented on issue #1036: YARN-9655: add a UT for lost applicationPriority in FederationInterceptor

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1036:  YARN-9655: add a UT for lost 
applicationPriority in FederationInterceptor
URL: https://github.com/apache/hadoop/pull/1036#issuecomment-507931777
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1092 | trunk passed |
   | +1 | compile | 67 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 41 | trunk passed |
   | +1 | shadedclient | 775 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 79 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 76 | trunk passed |
   | -0 | patch | 98 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 57 | the patch passed |
   | +1 | javac | 57 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 39 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 885 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 25 | the patch passed |
   | +1 | findbugs | 87 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 1285 | hadoop-yarn-server-nodemanager in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 4718 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1036/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1036 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 762fdac22979 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 91cc197 |
   | Default Java | 1.8.0_212 |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1036/4/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1036/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] mukul1987 opened a new pull request #1049: HDDS-1758. Add replication and key deletion tests to MiniOzoneChaosCluster. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
mukul1987 opened a new pull request #1049: HDDS-1758. Add replication and key 
deletion tests to MiniOzoneChaosCluster. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1049
 
 
   This patch adds key deletion and replication manager capability to 
MiniOzoneChaosCluster.
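   A chaos cluster of this kind interleaves client load with induced failures. Purely as an illustration of the shape such a load generator takes (the real MiniOzoneChaosCluster API differs; `KeyStore` below is a stand-in interface defined here, not an Ozone class):

```java
import java.util.Random;

public class ChaosLoadSketch {

  /** Stand-in for an Ozone bucket client; not a real Ozone interface. */
  interface KeyStore {
    void writeKey(String name);
    void deleteKey(String name);
  }

  // Mix writes and deletes over a small key space so that deletion races
  // replication and container close, which is the behavior the chaos
  // tests are meant to exercise.
  static void run(KeyStore store, int ops, long seed) {
    Random random = new Random(seed);
    for (int i = 0; i < ops; i++) {
      String key = "key-" + random.nextInt(100);
      if (random.nextBoolean()) {
        store.writeKey(key);
      } else {
        store.deleteKey(key);
      }
    }
  }
}
```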





[GitHub] [hadoop] hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memor…

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memor…
URL: https://github.com/apache/hadoop/pull/1011#issuecomment-507924006
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 33 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1044 | trunk passed |
   | +1 | compile | 1044 | trunk passed |
   | +1 | checkstyle | 130 | trunk passed |
   | +1 | mvnsite | 137 | trunk passed |
   | +1 | shadedclient | 928 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 120 | trunk passed |
   | 0 | spotbugs | 167 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 275 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 104 | the patch passed |
   | +1 | compile | 971 | the patch passed |
   | +1 | javac | 971 | the patch passed |
   | -0 | checkstyle | 145 | root: The patch generated 2 new + 245 unchanged - 1 fixed = 247 total (was 246) |
   | +1 | mvnsite | 141 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 658 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 121 | the patch passed |
   | +1 | findbugs | 323 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 563 | hadoop-common in the patch passed. |
   | -1 | unit | 5064 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 65 | The patch does not generate ASF License warnings. |
   | | | 11955 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestEditLogRace |
   |   | hadoop.hdfs.server.namenode.TestEditLogJournalFailures |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
   |   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
   |   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithHA |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
   |   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
   |   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.tools.TestDFSAdminWithHA |
   |   | hadoop.hdfs.TestFileAppend |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.TestHDFSConcat |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.snapshot.TestDiffListBySkipList |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing |
   |   | hadoop.hdfs.server.namenode.TestMetaSave |
   |   | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotListing |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   |   | hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots |
   |   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl |
   |   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
   |   | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.server.namenode.TestNameNodeAcl |
   |   | hadoop.hdfs.TestDFSStorageStateRecovery |
   |   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
   |   | hadoop.hdfs.server.datano

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
bharatviswa504 commented on a change in pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299764094
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
+    // client should expect, in terms of current size of key. If client sets
+    // a value, then this value is used, otherwise, we allocate a single
+    // block which is the current size, if read by the client.
+    final long reque

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
bharatviswa504 commented on a change in pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299763429
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
+    // client should expect, in terms of current size of key. If client sets
+    // a value, then this value is used, otherwise, we allocate a single
+    // block which is the current size, if read by the client.
+    final long reque

[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299762796
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
+    // client should expect, in terms of current size of key. If client sets
+    // a value, then this value is used, otherwise, we allocate a single
+    // block which is the current size, if read by the client.
+    final long requestedSize =

[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
bharatviswa504 commented on a change in pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299762555
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
+    // client should expect, in terms of current size of key. If client sets
+    // a value, then this value is used, otherwise, we allocate a single
+    // block which is the current size, if read by the client.
+    final long reque

[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299762339
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
+    // client should expect, in terms of current size of key. If client sets
+    // a value, then this value is used, otherwise, we allocate a single
+    // block which is the current size, if read by the client.
+    final long requestedSize =

[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299762216
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
 
 Review comment:
   I think I understood the intention but it should be reworded a bit.



[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299762127
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest = getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
 
 Review comment:
   This comment is a little unclear. Could you please clarify?
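   One plausible reading of the NOTE being discussed, sketched with illustrative names (this is not the patch code): the client-supplied size only decides how many blocks to pre-allocate up front; it is never enforced as a cap on the key.

```java
public class BlockPreallocationSketch {
  public static void main(String[] args) {
    long scmBlockSize = 256L * 1024 * 1024;  // assumed SCM block size
    long requestedSize = 600L * 1024 * 1024; // dataSize from KeyArgs, if set

    // If the client supplied a size, pre-allocate ceil(size / blockSize)
    // blocks; otherwise start with a single block. Reads return whatever
    // has actually been committed, so the number is only a hint.
    long preallocatedBlocks = requestedSize > 0
        ? (requestedSize + scmBlockSize - 1) / scmBlockSize
        : 1;

    System.out.println("blocks to pre-allocate: " + preallocatedBlocks); // 3
  }
}
```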



[GitHub] [hadoop] hadoop-yetus commented on issue #1029: HDDS-1384. TestBlockOutputStreamWithFailures is failing

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1029: HDDS-1384. 
TestBlockOutputStreamWithFailures is failing
URL: https://github.com/apache/hadoop/pull/1029#issuecomment-507916770
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 109 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 543 | trunk passed |
   | +1 | compile | 265 | trunk passed |
   | +1 | checkstyle | 71 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 307 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 494 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 429 | the patch passed |
   | +1 | compile | 249 | the patch passed |
   | +1 | javac | 249 | the patch passed |
   | -0 | checkstyle | 36 | hadoop-hdds: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 509 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 292 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1560 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6804 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1029/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1029 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fef3e86339d6 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 91cc197 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1029/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1029/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1029/2/testReport/ |
   | Max. process+thread count | 5407 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1029/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hunshenshi commented on issue #1036: YARN-9655: add a UT for lost applicationPriority in FederationInterceptor

2019-07-02 Thread GitBox
hunshenshi commented on issue #1036:  YARN-9655: add a UT for lost 
applicationPriority in FederationInterceptor
URL: https://github.com/apache/hadoop/pull/1036#issuecomment-507914353
 
 
   Sure, I will fix it: delete the not-null check and add a check for the 
actual priority.
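
   For context, the agreed fix replaces a bare not-null assertion with an 
assertion on the concrete default value. A minimal JUnit sketch of that idea 
follows; the class and method names are illustrative assumptions, not the 
actual TestFederationInterceptor code:

   ```java
   // Illustrative sketch only: names are assumptions, not the real UT.
   import static org.junit.Assert.assertEquals;
   import static org.junit.Assert.assertNotNull;

   import org.apache.hadoop.yarn.api.records.Priority;
   import org.junit.Test;

   public class ApplicationPriorityCheckSketch {
     @Test
     public void testApplicationPriorityIsPropagated() {
       // Stand-in for the priority returned through FederationInterceptor.
       Priority priority = Priority.newInstance(0);
       // The old check only guarded against the NPE:
       assertNotNull(priority);
       // The suggested check also pins down the expected default value:
       assertEquals(0, priority.getPriority());
     }
   }
   ```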


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on issue #1036: YARN-9655: add a UT for lost applicationPriority in FederationInterceptor

2019-07-02 Thread GitBox
goiri commented on issue #1036:  YARN-9655: add a UT for lost 
applicationPriority in FederationInterceptor
URL: https://github.com/apache/hadoop/pull/1036#issuecomment-507913975
 
 
   We can still check for 0, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hunshenshi commented on issue #1036: YARN-9655: add a UT for lost applicationPriority in FederationInterceptor

2019-07-02 Thread GitBox
hunshenshi commented on issue #1036:  YARN-9655: add a UT for lost 
applicationPriority in FederationInterceptor
URL: https://github.com/apache/hadoop/pull/1036#issuecomment-507912637
 
 
   @goiri The actual priority defaults to 0. Because the error was an NPE, I 
just check that it is not null.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on issue #1036: YARN-9655: add a UT for lost applicationPriority in FederationInterceptor

2019-07-02 Thread GitBox
goiri commented on issue #1036:  YARN-9655: add a UT for lost 
applicationPriority in FederationInterceptor
URL: https://github.com/apache/hadoop/pull/1036#issuecomment-507891702
 
 
   The title looks good.
   Can we check for the actual priority?
   I don't get why Yetus couldn't compile.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299730867
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileCreateRequest.java
 ##
 @@ -0,0 +1,348 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+import javax.annotation.Nonnull;
+
+import com.google.common.base.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ExcludeList;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .CreateFileRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+    .OMRequest;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.utils.UniqueId;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.apache.hadoop.utils.db.cache.CacheKey;
+import org.apache.hadoop.utils.db.cache.CacheValue;
+
+
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.FILE_EXISTS;
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.NONE;
+
+/**
+ * Handles create file request.
+ */
+public class OMFileCreateRequest extends OMKeyCreateRequest
+    implements OMKeyRequest {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OMFileCreateRequest.class);
+  public OMFileCreateRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+
+  @Override
+  public OMRequest preExecute(OzoneManager ozoneManager) throws IOException {
+    CreateFileRequest createFileRequest =
+        getOmRequest().getCreateFileRequest();
+    Preconditions.checkNotNull(createFileRequest);
+
+    KeyArgs keyArgs = createFileRequest.getKeyArgs();
+
+    if (keyArgs.getKeyName().length() == 0) {
+      // Check if this is the root of the filesystem.
+      // Not throwing exception here, as need to throw exception after
+      // checking volume/bucket exists.
+      return getOmRequest().toBuilder().setUserInfo(getUserInfo()).build();
+    }
+
+    long scmBlockSize = ozoneManager.getScmBlockSize();
+
+    // NOTE size of a key is not a hard limit on anything, it is a value that
+    // client should expect, in terms of current size of key. If client sets
+    // a value, then this value is used, otherwise, we allocate a single
+    // block which is the current size, if read by the client.
+    final long requestedSize =

[GitHub] [hadoop] hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile 
Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507884383
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for branch |
   | +1 | mvninstall | 470 | trunk passed |
   | +1 | compile | 247 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 899 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 489 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 478 | the patch passed |
   | +1 | compile | 287 | the patch passed |
   | +1 | cc | 287 | the patch passed |
   | +1 | javac | 287 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 721 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 619 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 342 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1643 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6985 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 64811b693396 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8b0d1ad |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/9/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/9/testReport/ |
   | Max. process+thread count | 5293 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/9/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer merged pull request #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
anuengineer merged pull request #1048: HDDS-1757. Use ExecutorService in 
OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1048: HDDS-1757. Use ExecutorService in 
OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048#issuecomment-507874783
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 455 | trunk passed |
   | +1 | compile | 246 | trunk passed |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 159 | trunk passed |
   | 0 | spotbugs | 318 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 506 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 428 | the patch passed |
   | +1 | compile | 249 | the patch passed |
   | +1 | javac | 249 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 1 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 646 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | the patch passed |
   | +1 | findbugs | 521 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 241 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1183 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6061 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8a5976ede4f7 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 75b1e45 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/3/testReport/ |
   | Max. process+thread count | 5018 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16386) FindBugs warning in branch-2: GlobalStorageStatistics defines non-transient non-serializable instance field map

2019-07-02 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877338#comment-16877338
 ] 

Masatake Iwasaki commented on HADOOP-16386:
---

The previous comment was not accurate: just running {{mvn findbugs:findbugs}} 
on openjdk8 is enough. I got the warning even when I ran {{mvn compile}} on 
openjdk8.

> FindBugs warning in branch-2: GlobalStorageStatistics defines non-transient 
> non-serializable instance field map
> ---
>
> Key: HADOOP-16386
> URL: https://issues.apache.org/jira/browse/HADOOP-16386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Found in HDFS-14585
> https://builds.apache.org/job/PreCommit-HDFS-Build/27024/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html#Warnings_BAD_PRACTICE
> {quote}Class org.apache.hadoop.fs.GlobalStorageStatistics defines 
> non-transient non-serializable instance field map
> Bug type SE_BAD_FIELD (click for details) 
> In class org.apache.hadoop.fs.GlobalStorageStatistics
> Field org.apache.hadoop.fs.GlobalStorageStatistics.map
> Actual type org.apache.hadoop.fs.StorageStatistics
> In GlobalStorageStatistics.java{quote}
> {quote}SE_BAD_FIELD: Non-transient non-serializable instance field in 
> serializable class
> This Serializable class defines a non-primitive instance field which is 
> neither transient, Serializable, or java.lang.Object, and does not appear to 
> implement the Externalizable interface or the readObject() and writeObject() 
> methods.  Objects of this class will not be deserialized correctly if a 
> non-Serializable object is stored in this field.{quote}
> Looking in my inbox, this warning has been there since February 9, 2019
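
As a minimal illustration of this warning class (not the GlobalStorageStatistics 
code itself), FindBugs flags the following shape; marking the field transient 
is one conventional fix:

{code:java}
import java.io.Serializable;
import java.util.Iterator;

// Minimal shape that triggers SE_BAD_FIELD: a Serializable class with a
// non-transient instance field whose type is not Serializable.
class StatsHolder implements Serializable {
  private static final long serialVersionUID = 1L;

  // Iterator is not Serializable, so FindBugs reports SE_BAD_FIELD here.
  private Iterator<String> badField;

  // Conventional fix: mark the field transient so default serialization
  // skips it (at the cost of losing the value across serialization).
  private transient Iterator<String> okField;
}
{code}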



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
anuengineer commented on issue #1048: HDDS-1757. Use ExecutorService in 
OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048#issuecomment-507862565
 
 
   Please feel free to commit when all checks are complete. Thanks for the 
patch. Appreciate it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16386) FindBugs warning in branch-2: GlobalStorageStatistics defines non-transient non-serializable instance field map

2019-07-02 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877328#comment-16877328
 ] 

Masatake Iwasaki commented on HADOOP-16386:
---

I can reproduce this warning by running {{mvn compile}} on openjdk7 and {{mvn 
findbugs:findbugs}} on openjdk8. The cause seems to be the change in branch-2 
precommit execution that makes the test phase run on openjdk8.

> FindBugs warning in branch-2: GlobalStorageStatistics defines non-transient 
> non-serializable instance field map
> ---
>
> Key: HADOOP-16386
> URL: https://issues.apache.org/jira/browse/HADOOP-16386
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.10.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Found in HDFS-14585
> https://builds.apache.org/job/PreCommit-HDFS-Build/27024/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html#Warnings_BAD_PRACTICE
> {quote}Class org.apache.hadoop.fs.GlobalStorageStatistics defines 
> non-transient non-serializable instance field map
> Bug type SE_BAD_FIELD (click for details) 
> In class org.apache.hadoop.fs.GlobalStorageStatistics
> Field org.apache.hadoop.fs.GlobalStorageStatistics.map
> Actual type org.apache.hadoop.fs.StorageStatistics
> In GlobalStorageStatistics.java{quote}
> {quote}SE_BAD_FIELD: Non-transient non-serializable instance field in 
> serializable class
> This Serializable class defines a non-primitive instance field which is 
> neither transient, Serializable, or java.lang.Object, and does not appear to 
> implement the Externalizable interface or the readObject() and writeObject() 
> methods.  Objects of this class will not be deserialized correctly if a 
> non-Serializable object is stored in this field.{quote}
> Looking in my inbox, this warning has been there since February 9, 2019



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
bharatviswa504 commented on a change in pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299699515
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -162,73 +164,88 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
 
     KeyArgs keyArgs = createKeyRequest.getKeyArgs();
 
-
     String volumeName = keyArgs.getVolumeName();
     String bucketName = keyArgs.getBucketName();
     String keyName = keyArgs.getKeyName();
 
     OMMetrics omMetrics = ozoneManager.getMetrics();
     omMetrics.incNumKeyAllocates();
 
-    AuditLogger auditLogger = ozoneManager.getAuditLogger();
-
-    Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
-
-    OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
-        OzoneManagerProtocolProtos.Type.CreateKey).setStatus(
-        OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
-
+    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+    OmKeyInfo omKeyInfo = null;
+    final List<OmKeyLocationInfo> locations = new ArrayList<>();
+    FileEncryptionInfo encryptionInfo = null;
+    IOException exception = null;
+    boolean acquireLock = false;
 
 Review comment:
   Done. While refactoring for OMFileCreateRequest, I missed acquiring the 
lock; added that back.
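
   The pattern referred to here, as a simplified sketch: it assumes 
omMetadataManager, volumeName and bucketName are in scope as in the diff 
above, treats the exact lock-API signatures as assumptions, and elides 
validation and response building, so it is not the committed code:

   ```java
   // Sketch of the acquire/release pattern under discussion.
   boolean acquireLock = false;
   IOException exception = null;
   try {
     // Record whether the lock was actually taken so that the finally
     // block never releases a lock it does not hold.
     acquireLock = omMetadataManager.getLock().acquireLock(BUCKET_LOCK,
         volumeName, bucketName);
     // ... validate bucket, build OmKeyInfo, update the table cache ...
   } catch (IOException ex) {
     exception = ex;
   } finally {
     if (acquireLock) {
       omMetadataManager.getLock().releaseLock(BUCKET_LOCK, volumeName,
           bucketName);
     }
   }
   ```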


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1039: HDDS-1616. ManagedChannel references are being leaked in while removing RaftGroup. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
bharatviswa504 merged pull request #1039: HDDS-1616. ManagedChannel references 
are being leaked in while removing RaftGroup. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1039
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1039: HDDS-1616. ManagedChannel references are being leaked in while removing RaftGroup. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
bharatviswa504 commented on issue #1039: HDDS-1616. ManagedChannel references 
are being leaked in while removing RaftGroup. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1039#issuecomment-507853437
 
 
   Test failures are not related to this patch.
   Thank you @mukul1987 for the contribution.
   I have committed this to trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
bharatviswa504 commented on issue #1048: HDDS-1757. Use ExecutorService in 
OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048#issuecomment-507848143
 
 
   Thank you @anuengineer for the review.
   As suggested, I used the HadoopExecutor API and also invoked shutdown 
during stop.
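
   The shape of that change, as a rough sketch: it assumes the 
HadoopExecutors factory from hadoop-common, and the real field and method 
names in OzoneManagerStateMachine may differ:

   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.TimeUnit;

   import org.apache.hadoop.util.concurrent.HadoopExecutors;

   // Sketch: a single-threaded executor owned by the state machine,
   // created up front and shut down when the state machine stops.
   public class StateMachineExecutorSketch {
     private final ExecutorService executorService =
         HadoopExecutors.newSingleThreadExecutor();

     public void submitApplyTransaction(Runnable task) {
       // Apply transactions on the dedicated executor thread instead of
       // blocking the Ratis callback thread.
       executorService.submit(task);
     }

     public void stop() throws InterruptedException {
       // Invoke shutdown during stop, as described above.
       executorService.shutdown();
       executorService.awaitTermination(10, TimeUnit.SECONDS);
     }
   }
   ```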


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile 
Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507847438
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for branch |
   | +1 | mvninstall | 464 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 845 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 348 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 561 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 473 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | cc | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 659 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 562 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 149 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1346 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 6262 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 5dc57384e14b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/8/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/8/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/8/testReport/ |
   | Max. process+thread count | 4551 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile 
Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507846189
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 474 | trunk passed |
   | +1 | compile | 253 | trunk passed |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 318 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 505 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | +1 | mvninstall | 427 | the patch passed |
   | +1 | compile | 245 | the patch passed |
   | +1 | cc | 245 | the patch passed |
   | +1 | javac | 245 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 608 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 139 | the patch passed |
   | +1 | findbugs | 532 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 242 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1722 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6461 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 437c028a09ec 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/6/testReport/ |
   | Max. process+thread count | 4511 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile 
Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507844856
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 10 | Maven dependency ordering for branch |
   | +1 | mvninstall | 473 | trunk passed |
   | +1 | compile | 247 | trunk passed |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 778 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 201 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 125 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 428 | the patch passed |
   | +1 | compile | 247 | the patch passed |
   | +1 | cc | 247 | the patch passed |
   | +1 | javac | 247 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 599 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 530 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 238 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1363 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 5876 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.web.client.TestKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 48ffe2696b5f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/7/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/7/testReport/ |
   | Max. process+thread count | 2027 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1048: HDDS-1757. Use ExecutorService in 
OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048#issuecomment-507843475
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 68 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 468 | trunk passed |
   | +1 | compile | 231 | trunk passed |
   | +1 | checkstyle | 70 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 880 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 494 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 430 | the patch passed |
   | +1 | compile | 253 | the patch passed |
   | +1 | javac | 253 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 605 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 335 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2108 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7316 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7ca884e2b65d 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/2/testReport/ |
   | Max. process+thread count | 3769 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1048: HDDS-1757. Use ExecutorService in 
OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048#issuecomment-507840773
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 489 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 69 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 954 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 355 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 581 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 472 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | the patch passed |
   | +1 | findbugs | 584 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 320 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1472 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 6929 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0a50f47d0fd3 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/1/testReport/ |
   | Max. process+thread count | 5284 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1048/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek closed pull request #1002: HDDS-1716. Smoketest results are generated with an internal user

2019-07-02 Thread GitBox
elek closed pull request #1002: HDDS-1716. Smoketest results are generated with 
an internal user
URL: https://github.com/apache/hadoop/pull/1002
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1045: HDDS-1741 Fix prometheus configuration in ozoneperf example cluster

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1045: HDDS-1741 Fix prometheus configuration 
in ozoneperf example cluster
URL: https://github.com/apache/hadoop/pull/1045#issuecomment-507833256
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 73 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 609 | trunk passed |
   | +1 | compile | 252 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1668 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 447 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 152 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 285 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2032 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6001 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1045 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient yamllint |
   | uname | Linux c59f15ace3e1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/2/testReport/ |
   | Max. process+thread count | 5057 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1045/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877289#comment-16877289
 ] 

Íñigo Goiri commented on HADOOP-16385:
--

Thanks [~ayushtkn] for working on this.
Committed to trunk, branch-3.2, and branch-3.1.

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description:* While removing dead nodes, the Namenode went down with the 
> error "RedundancyMonitor thread received Runtime exception".
> *Environment:*
> Server OS: Ubuntu
> No. of cluster nodes: 1 NN / 225 DNs / 3 ZK / 2 RM / 4850 NMs
> (240 machines in total; each machine runs 21 Docker containers: 1 DN and 20 
> NMs)
> *Steps:*
> 1. Total number of containers in the running state: ~53000
> 2. Because of the load, machines were running out of memory and restarting, 
> which restarted all of their Docker containers, including the NMs and DNs.
> 3. At some point the Namenode threw the error below while removing a node, 
> and the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
> at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.lang.IllegalArgumentException: 247 should >= 248, and both 
> should be positi

[jira] [Commented] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877287#comment-16877287
 ] 

Hudson commented on HADOOP-16385:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16852 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16852/])
HADOOP-16385. Namenode crashes with 'RedundancyMonitor thread received 
(inigoiri: rev aa9f0850e85203b2ce4f4a8dc8968e9186cdc67a)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
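
For context on the failure (a sketch with hypothetical variable names, not the 
committed patch): the RedundancyMonitor dies because a Guava precondition in 
NetworkTopology.chooseRandom compares two node counts that can be sampled from 
different topology snapshots while dead nodes are being removed concurrently:

{code:java}
// Illustrative reconstruction only -- variable names are hypothetical.
int available = countNumOfAvailableNodes(scope, excludedNodes);          // e.g. 247
int excluded  = countNumOfAvailableNodes(excludedScope, excludedNodes);  // e.g. 248
// If a concurrent dead-node removal lands between the two reads, the
// invariant below can transiently fail, and the unchecked
// IllegalArgumentException escapes and kills the RedundancyMonitor thread:
Preconditions.checkArgument(available >= excluded && excluded > 0,
    "%s should >= %s, and both should be positive.", available, excluded);
{code}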


> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description:* While removing dead nodes, the Namenode went down with the 
> error "RedundancyMonitor thread received Runtime exception".
> *Environment:*
> Server OS: Ubuntu
> No. of cluster nodes: 1 NN / 225 DNs / 3 ZK / 2 RM / 4850 NMs
> (240 machines in total; each machine runs 21 Docker containers: 1 DN and 20 
> NMs)
> *Steps:*
> 1. Total number of containers in the running state: ~53000
> 2. Because of the load, machines were running out of memory and restarting, 
> which restarted all of their Docker containers, including the NMs and DNs.
> 3. At some point the Namenode threw the error below while removing a node, 
> and the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$Redun

[jira] [Updated] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HADOOP-16385:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.3
   3.2.1
   3.3.0
   Status: Resolved  (was: Patch Available)

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description:* While removing dead nodes, the Namenode went down with the 
> error "RedundancyMonitor thread received Runtime exception".
> *Environment:*
> Server OS: Ubuntu
> No. of cluster nodes: 1 NN / 225 DNs / 3 ZK / 2 RM / 4850 NMs
> (240 machines in total; each machine runs 21 Docker containers: 1 DN and 20 
> NMs)
> *Steps:*
> 1. Total number of containers in the running state: ~53000
> 2. Because of the load, machines were running out of memory and restarting, 
> which restarted all of their Docker containers, including the NMs and DNs.
> 3. At some point the Namenode threw the error below while removing a node, 
> and the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
> at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.lang.IllegalArgumentException: 247 should >=

[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299664639
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -162,73 +164,88 @@ public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
 
     KeyArgs keyArgs = createKeyRequest.getKeyArgs();
 
-
     String volumeName = keyArgs.getVolumeName();
     String bucketName = keyArgs.getBucketName();
     String keyName = keyArgs.getKeyName();
 
     OMMetrics omMetrics = ozoneManager.getMetrics();
     omMetrics.incNumKeyAllocates();
 
-    AuditLogger auditLogger = ozoneManager.getAuditLogger();
-
-    Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
-
-    OMResponse.Builder omResponse = OMResponse.newBuilder().setCmdType(
-        OzoneManagerProtocolProtos.Type.CreateKey).setStatus(
-        OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
-
+    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+    OmKeyInfo omKeyInfo = null;
+    final List<OmKeyLocationInfo> locations = new ArrayList<>();
+    FileEncryptionInfo encryptionInfo = null;
+    IOException exception = null;
+    boolean acquireLock = false;
 
 Review comment:
   There seems to be a bug here. `acquireLock` is never set to true in any code 
path.
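
A minimal sketch of how the flag is usually meant to work (the lock calls are 
illustrative of the pattern, not necessarily the exact OzoneManagerLock API at 
this revision):

```java
boolean acquireLock = false;
try {
  omMetadataManager.getLock().acquireBucketLock(volumeName, bucketName);
  // Record that the lock is actually held so the finally block can decide
  // whether to release it; without this assignment the flag is dead code.
  acquireLock = true;
  // ... validate the request and update the cache under the lock ...
} finally {
  if (acquireLock) {
    omMetadataManager.getLock().releaseBucketLock(volumeName, bucketName);
  }
}
```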


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile 
Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507812918
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 45 | Maven dependency ordering for branch |
   | +1 | mvninstall | 489 | trunk passed |
   | +1 | compile | 261 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 870 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 393 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 584 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 19 | Maven dependency ordering for patch |
   | +1 | mvninstall | 422 | the patch passed |
   | +1 | compile | 249 | the patch passed |
   | +1 | cc | 249 | the patch passed |
   | +1 | javac | 249 | the patch passed |
   | +1 | checkstyle | 66 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 617 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 158 | the patch passed |
   | +1 | findbugs | 515 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 232 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1287 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 6274 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d5420072e60f 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/4/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/4/testReport/ |
   | Max. process+thread count | 5404 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1044: HDDS-1731. Implement File CreateFile 
Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#issuecomment-507812302
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 464 | trunk passed |
   | +1 | compile | 240 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 804 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 307 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 490 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 438 | the patch passed |
   | +1 | compile | 240 | the patch passed |
   | +1 | cc | 240 | the patch passed |
   | +1 | javac | 240 | the patch passed |
   | +1 | checkstyle | 63 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 603 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 142 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 234 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1332 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6046 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1044 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d8a36bebe539 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/5/testReport/ |
   | Max. process+thread count | 5264 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1044/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
bharatviswa504 commented on a change in pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299645562
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
 ##
 @@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+
+import javax.annotation.Nonnull;
+
+/**
+ * Base interface for file requests.
+ */
+public interface OMFileRequest {
+  /**
+   * Verify whether any file or directory already exists along the given
+   * key path in the specified volume/bucket.
+   * @param omMetadataManager metadata manager used for the key lookups
+   * @param volumeName volume to check
+   * @param bucketName bucket to check
+   * @param keyName key whose path components are verified
+   * @param keyPath path form of the key, walked upwards towards the root
+   * @return OMDirectoryResult describing what, if anything, exists in the
+   * given path.
+   * @throws IOException if a metadata lookup fails
+   */
+  default OMDirectoryResult verifyFilesInPath(
+      @Nonnull OMMetadataManager omMetadataManager, @Nonnull String volumeName,
+      @Nonnull String bucketName, @Nonnull String keyName,
+      @Nonnull Path keyPath) throws IOException {
+
+    String fileNameFromDetails = omMetadataManager.getOzoneKey(volumeName,
+        bucketName, keyName);
+    String dirNameFromDetails = omMetadataManager.getOzoneDirKey(volumeName,
+        bucketName, keyName);
+
+    while (keyPath != null) {
+      String pathName = keyPath.toString();
+
+      String dbKeyName = omMetadataManager.getOzoneKey(volumeName,
+          bucketName, pathName);
+      String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+          bucketName, pathName);
+
+      if (omMetadataManager.getKeyTable().get(dbKeyName) != null) {
+        // Found a file along the given path. Check whether it is the
+        // requested file itself or a file on one of its ancestors.
+        if (dbKeyName.equals(fileNameFromDetails)) {
+          return OMDirectoryResult.FILE_EXISTS;
+        } else {
+          return OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+        }
+      } else if (omMetadataManager.getKeyTable().get(dbDirKeyName) != null) {
+        // Found a directory along the given path. Check whether it is the
+        // requested directory itself or a directory on one of its ancestors.
+        if (dbDirKeyName.equals(dirNameFromDetails)) {
+          return OMDirectoryResult.DIRECTORY_EXISTS;
+        } else {
+          return OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+        }
+      }
+      keyPath = keyPath.getParent();
+    }
+
+    // Found no files or directories along the given path.
+    return OMDirectoryResult.NONE;
+  }
+
+  /**
+   * Return codes used by verifyFilesInPath method.
+   */
+  enum OMDirectoryResult {
+    DIRECTORY_EXISTS_IN_GIVENPATH,
 
 Review comment:
   Done
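
For readers of the archive, a sketch of how a create-file request typically 
branches on the result (condensed and hypothetical; the exception result codes 
are illustrative):

```java
OMDirectoryResult result = verifyFilesInPath(omMetadataManager,
    volumeName, bucketName, keyName, Paths.get(keyName));

if (result == OMDirectoryResult.FILE_EXISTS && !isOverwrite) {
  // The exact key already exists and overwrite was not requested.
  throw new OMException("File already exists: " + keyName,
      OMException.ResultCodes.FILE_ALREADY_EXISTS);
} else if (result == OMDirectoryResult.DIRECTORY_EXISTS
    || result == OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH) {
  // Cannot create a file where a directory exists, or underneath a file.
  throw new OMException("Cannot create file: " + keyName,
      OMException.ResultCodes.NOT_A_FILE);
}
// Otherwise (NONE or DIRECTORY_EXISTS_IN_GIVENPATH) the create can proceed.
```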


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
bharatviswa504 commented on a change in pull request #1044: HDDS-1731. 
Implement File CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299645522
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
 ##
 @@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+
+import javax.annotation.Nonnull;
+
+/**
+ * Base interface for file requests.
+ */
+public interface OMFileRequest {
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 opened a new pull request #1048: HDDS-1757. Use ExecutorService in OzoneManagerStateMachine.

2019-07-02 Thread GitBox
bharatviswa504 opened a new pull request #1048: HDDS-1757. Use ExecutorService 
in OzoneManagerStateMachine.
URL: https://github.com/apache/hadoop/pull/1048
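
The title suggests moving applyTransaction work off the Ratis caller thread and 
onto an executor; a generic sketch of that pattern (purely illustrative, not the 
PR's actual code -- `runCommand` is a hypothetical helper):

```java
private final ExecutorService executorService =
    Executors.newSingleThreadExecutor(new ThreadFactoryBuilder()
        .setNameFormat("OM-StateMachine-ApplyTransaction-%d").build());

@Override
public CompletableFuture<Message> applyTransaction(TransactionContext trx) {
  // A single-threaded executor preserves the per-state-machine apply order
  // while freeing the Ratis thread that invoked this method.
  return CompletableFuture.supplyAsync(() -> runCommand(trx), executorService);
}
```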
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1002: HDDS-1716. Smoketest results are generated with an internal user

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1002: HDDS-1716. Smoketest results are 
generated with an internal user
URL: https://github.com/apache/hadoop/pull/1002#issuecomment-507803494
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 74 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 514 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 831 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 434 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 2 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 99 | hadoop-hdds in the patch passed. |
   | +1 | unit | 170 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 3084 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1002/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1002 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 9837b950f400 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 564758a |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1002/3/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1002/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #861: HDDS-1596. Create service endpoint to download configuration from SCM

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #861: HDDS-1596. Create service endpoint to 
download configuration from SCM
URL: https://github.com/apache/hadoop/pull/861#issuecomment-507800962
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 139 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 92 | Maven dependency ordering for branch |
   | +1 | mvninstall | 680 | trunk passed |
   | +1 | compile | 317 | trunk passed |
   | +1 | checkstyle | 102 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 933 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 174 | trunk passed |
   | 0 | spotbugs | 329 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 534 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 464 | the patch passed |
   | +1 | compile | 284 | the patch passed |
   | +1 | javac | 284 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 194 | the patch passed |
   | +1 | findbugs | 629 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 360 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1736 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 7857 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.TestStorageContainerManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml yamllint shellcheck shelldocs 
|
   | uname | Linux 65864e08cb7d 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/7/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/7/testReport/ |
   | Max. process+thread count | 5170 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/framework hadoop-hdds/server-scm hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/ozonefs 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-861/7/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoo

[jira] [Commented] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877229#comment-16877229
 ] 

Íñigo Goiri commented on HADOOP-16385:
--

Yes, it looks like the usual suspects.
Committing to the branches.

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description:* While removing dead nodes, the Namenode went down with the 
> error "RedundancyMonitor thread received Runtime exception".
> *Environment:*
> Server OS: Ubuntu
> No. of cluster nodes: 1 NN / 225 DNs / 3 ZK / 2 RM / 4850 NMs
> (240 machines in total; each machine runs 21 Docker containers: 1 DN and 20 
> NMs)
> *Steps:*
> 1. Total number of containers in the running state: ~53000
> 2. Because of the load, machines were running out of memory and restarting, 
> which restarted all of their Docker containers, including the NMs and DNs.
> 3. At some point the Namenode threw the error below while removing a node, 
> and the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
> at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.lang.IllegalArgumentException: 247 should >= 248, and both 
> should be positive.
> 2019-06-19 05:54:07,298 INFO 
> org.apache.hadoop.hdfs.serve

[jira] [Commented] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877209#comment-16877209
 ] 

Ayush Saxena commented on HADOOP-16385:
---

Test failures are unrelated.

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description:* While removing dead nodes, the Namenode went down with the 
> error "RedundancyMonitor thread received Runtime exception".
> *Environment:*
> Server OS: Ubuntu
> No. of cluster nodes: 1 NN / 225 DNs / 3 ZK / 2 RM / 4850 NMs
> (240 machines in total; each machine runs 21 Docker containers: 1 DN and 20 
> NMs)
> *Steps:*
> 1. Total number of containers in the running state: ~53000
> 2. Because of the load, machines were running out of memory and restarting, 
> which restarted all of their Docker containers, including the NMs and DNs.
> 3. At some point the Namenode threw the error below while removing a node, 
> and the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
> at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.lang.IllegalArgumentException: 247 should >= 248, and both 
> should be positive.
> 2019-06-19 05:54:07,298 INFO 
> org.apache.hadoop.hdfs.server.common.HadoopAuditLogger.audit: 
>

[GitHub] [hadoop] hadoop-yetus commented on issue #979: HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #979: HDDS-1698. Switch to use 
apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979#issuecomment-507788186
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 493 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 782 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 451 | the patch passed |
   | +1 | compile | 282 | the patch passed |
   | +1 | javac | 283 | the patch passed |
   | -1 | hadolint | 3 | The patch generated 3 new + 14 unchanged - 3 fixed = 
17 total (was 17) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 690 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 175 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 261 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1433 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 74 | The patch does not generate ASF License warnings. |
   | | | 5373 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/979 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml hadolint yamllint |
   | uname | Linux 429f8ff3aace 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | hadolint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/3/artifact/out/diff-patch-hadolint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/3/testReport/ |
   | Max. process+thread count | 4479 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-979/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299616466
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
 ##
 @@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+
+import javax.annotation.Nonnull;
+
+/**
+ * Base interface for file requests.
+ */
+public interface OMFileRequest {
+  /**
+   * Verify whether any file or directory already exists along the given
+   * key path in the specified volume/bucket.
+   * @param omMetadataManager metadata manager used for the key lookups
+   * @param volumeName volume to check
+   * @param bucketName bucket to check
+   * @param keyName key whose path components are verified
+   * @param keyPath path form of the key, walked upwards towards the root
+   * @return OMDirectoryResult describing what, if anything, exists in the
+   * given path.
+   * @throws IOException if a metadata lookup fails
+   */
+  default OMDirectoryResult verifyFilesInPath(
+  @Nonnull OMMetadataManager omMetadataManager, @Nonnull String volumeName,
+  @Nonnull String bucketName, @Nonnull String keyName,
+  @Nonnull Path keyPath) throws IOException {
+
+String fileNameFromDetails = omMetadataManager.getOzoneKey(volumeName,
+bucketName, keyName);
+String dirNameFromDetails = omMetadataManager.getOzoneDirKey(volumeName,
+bucketName, keyName);
+
+while (keyPath != null) {
+  String pathName = keyPath.toString();
+
+  String dbKeyName = omMetadataManager.getOzoneKey(volumeName,
+  bucketName, pathName);
+  String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+  bucketName, pathName);
+
+  if (omMetadataManager.getKeyTable().get(dbKeyName) != null) {
+// Found a file in the given path.
+// Check whether this is the requested file itself or a file along the path
+if (dbKeyName.equals(fileNameFromDetails)) {
+  return OMDirectoryResult.FILE_EXISTS;
+} else {
+  return OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+}
+  } else if (omMetadataManager.getKeyTable().get(dbDirKeyName) != null) {
+// Found a directory in the given path.
+// Check whether this is the requested directory itself or one along the path
+if (dbDirKeyName.equals(dirNameFromDetails)) {
+  return OMDirectoryResult.DIRECTORY_EXISTS;
+} else {
+  return OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+}
+  }
+  keyPath = keyPath.getParent();
+}
+
+// Found no files or directories in the given path.
+return OMDirectoryResult.NONE;
+  }
+
+  /**
+   * Return codes used by verifyFilesInPath method.
+   */
+  enum OMDirectoryResult {
+DIRECTORY_EXISTS_IN_GIVENPATH,
 
 Review comment:
   Can you add a one-line comment to explain what each of these means? I got 
confused about the meaning of `FILE_EXISTS_IN_GIVENPATH`. I think that one is 
new.
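
   For illustration, a commented version inferred from the `verifyFilesInPath` 
logic quoted above might read like this (a sketch, not the committed code):
   
   ```java
   /**
    * Return codes used by verifyFilesInPath method.
    */
   enum OMDirectoryResult {
     // The requested key itself already exists as a file.
     FILE_EXISTS,
     // A file exists at an ancestor component of the requested path.
     FILE_EXISTS_IN_GIVENPATH,
     // The requested key itself already exists as a directory.
     DIRECTORY_EXISTS,
     // A directory exists at an ancestor component of the requested path.
     DIRECTORY_EXISTS_IN_GIVENPATH,
     // Nothing was found at any component of the path.
     NONE
   }
   ```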


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299615242
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java
 ##
 @@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.file;
+
+import javax.annotation.Nullable;
+
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+
+
+
+/**
+ * Response for create file request.
+ */
+public class OMFileCreateResponse extends OMKeyCreateResponse {
+
+  public OMFileCreateResponse(@Nullable OmKeyInfo omKeyInfo,
 
 Review comment:
   Ignore this, I was confused.
   
   ~~The annotation says `@Nullable`; however, if we follow the `super` calls 
we see the following assert in `OMClientResponse`:~~
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299615242
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/file/OMFileCreateResponse.java
 ##
 @@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.file;
+
+import javax.annotation.Nullable;
+
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+
+
+
+/**
+ * Response for create file request.
+ */
+public class OMFileCreateResponse extends OMKeyCreateResponse {
+
+  public OMFileCreateResponse(@Nullable OmKeyInfo omKeyInfo,
 
 Review comment:
   The annotation says `@Nullable`; however, if we follow the `super` calls we 
see the following assert in `OMClientResponse`:
   ```
 public OMClientResponse(OMResponse omResponse) {
   Preconditions.checkNotNull(omResponse);
   this.omResponse = omResponse;
 }
   ```
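
   (The follow-up earlier in this digest withdraws this concern: the 
`@Nullable` annotation is on `omKeyInfo`, while the `checkNotNull` in 
`OMClientResponse` guards the separate `omResponse` parameter. A hypothetical 
constructor shape, inferred from the imports and this discussion rather than 
from the patch itself, would be:)
   
   ```java
   // Hypothetical shape, for illustration only: the real signature is
   // truncated in the hunk above. omKeyInfo may legitimately be null, while
   // omResponse is asserted non-null by the OMClientResponse constructor.
   public OMFileCreateResponse(@Nullable OmKeyInfo omKeyInfo,
       OMResponse omResponse) {
     super(omKeyInfo, omResponse);
   }
   ```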


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arp7 commented on a change in pull request #1044: HDDS-1731. Implement File CreateFile Request to use Cache and DoubleBuffer.

2019-07-02 Thread GitBox
arp7 commented on a change in pull request #1044: HDDS-1731. Implement File 
CreateFile Request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1044#discussion_r299609602
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
 ##
 @@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import java.io.IOException;
+import java.nio.file.Path;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+
+import javax.annotation.Nonnull;
+
+/**
+ * Base interface for file requests.
+ */
+public interface OMFileRequest {
 
 Review comment:
   This should be a utility class. There is no generic behavior being 
implemented by the classes that implement this interface, so in this case the 
_favor composition over inheritance_ rule applies. A possible shape is 
sketched below.
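
   For illustration, the utility-class shape being suggested might look like 
this (a sketch that just moves the default method into a static helper and 
assumes the `OMDirectoryResult` enum moves with it; not the committed code):
   
   ```java
   package org.apache.hadoop.ozone.om.request.file;
   
   import java.io.IOException;
   import java.nio.file.Path;
   
   import org.apache.hadoop.ozone.om.OMMetadataManager;
   
   import javax.annotation.Nonnull;
   
   /**
    * Utility methods for file requests; no instances, nothing to inherit.
    */
   public final class OMFileRequest {
   
     private OMFileRequest() {
       // static helpers only
     }
   
     public static OMDirectoryResult verifyFilesInPath(
         @Nonnull OMMetadataManager omMetadataManager,
         @Nonnull String volumeName, @Nonnull String bucketName,
         @Nonnull String keyName, @Nonnull Path keyPath) throws IOException {
   
       String fileNameFromDetails = omMetadataManager.getOzoneKey(volumeName,
           bucketName, keyName);
       String dirNameFromDetails = omMetadataManager.getOzoneDirKey(volumeName,
           bucketName, keyName);
   
       while (keyPath != null) {
         String pathName = keyPath.toString();
         String dbKeyName = omMetadataManager.getOzoneKey(volumeName,
             bucketName, pathName);
         String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
             bucketName, pathName);
   
         if (omMetadataManager.getKeyTable().get(dbKeyName) != null) {
           // A file exists here: either the requested key or an ancestor.
           return dbKeyName.equals(fileNameFromDetails)
               ? OMDirectoryResult.FILE_EXISTS
               : OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
         } else if (omMetadataManager.getKeyTable().get(dbDirKeyName) != null) {
           // A directory exists here: either the requested key or an ancestor.
           return dbDirKeyName.equals(dirNameFromDetails)
               ? OMDirectoryResult.DIRECTORY_EXISTS
               : OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
         }
         keyPath = keyPath.getParent();
       }
   
       // Found no files or directories anywhere along the path.
       return OMDirectoryResult.NONE;
     }
   }
   ```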


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877194#comment-16877194
 ] 

Hadoop QA commented on HADOOP-16385:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  8m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16385 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973431/HADOOP-16385-HDFS_UT.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4a11428cd8f 4.4.0-139-generic #165-Ubuntu S

[jira] [Commented] (HADOOP-16407) Improve isolation of FS instances in S3A committer tests

2019-07-02 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877178#comment-16877178
 ] 

Steve Loughran commented on HADOOP-16407:
-

This is triggering now that in HADOOP-16384 I moved those tests which start MR 
jobs into the sequential phase. That helps stop the test system being 
overloaded by spawning too many processes (mini YARN cluster + MR job) per 
test suite, but it seems to be triggering recycling problems:
{code}

[INFO] Running 
org.apache.hadoop.fs.s3a.commit.staging.integration.ITestPartitionCommitMRJob
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.407 s 
<<< FAILURE! - in 
org.apache.hadoop.fs.s3a.commit.staging.integration.ITestPartitionCommitMRJob
[ERROR] 
testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITestPartitionCommitMRJob)
  Time elapsed: 0.442 s  <<< FAILURE!
java.lang.AssertionError: AWS client is not inconsistent, even though the test 
requirees it com.amazonaws.services.s3.AmazonS3Client@13d0b3d
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.s3a.commit.AbstractCommitITest.setup(AbstractCommitITest.java:173)
at 
org.apache.hadoop.fs.s3a.commit.AbstractYarnClusterITest.setup(AbstractYarnClusterITest.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

{code}

> Improve isolation of FS instances in S3A committer tests
> 
>
> Key: HADOOP-16407
> URL: https://issues.apache.org/jira/browse/HADOOP-16407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> Filesystem recycling stops the ITest*Committer tests working all the time, if 
> they pick up an existing FS which has a consistent instance.
> We can do better here
> Options
> * Base {{AbstractCommitITest}} creates both consistent and inconsistent 
> filesystems, *does not destroy either, ever*, subclasses choose which to bond 
> to
> * test setup to force disableFilesystemCaching(conf) in config setup; tear 
> down to probe the FS for this option, and if true, close the FS



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16407) Improve isolation of FS instances in S3A committer tests

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16407:

Priority: Minor  (was: Major)

> Improve isolation of FS instances in S3A committer tests
> 
>
> Key: HADOOP-16407
> URL: https://issues.apache.org/jira/browse/HADOOP-16407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Filesystem recycling stops the ITest*Committer tests working all the time, if 
> they pick up an existing FS which has a consistent instance.
> We can do better here
> Options
> * Base {{AbstractCommitITest}} creates both consistent and inconsistent 
> filesystems, *does not destroy either, ever*, subclasses choose which to bond 
> to
> * test setup to force disableFilesystemCaching(conf) in config setup; tear 
> down to probe the FS for this option, and if true, close the FS



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16407) Improve isolation of FS instances in S3A committer tests

2019-07-02 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16407:
---

 Summary: Improve isolation of FS instances in S3A committer tests
 Key: HADOOP-16407
 URL: https://issues.apache.org/jira/browse/HADOOP-16407
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


Filesystem recycling stops the ITest*Committer tests working all the time, if 
they pick up an existing FS which has a consistent instance.

We can do better here

Options
* Base {{AbstractCommitITest}} creates both consistent and inconsistent 
filesystems, *does not destroy either, ever*, subclasses choose which to bond to
* test setup to force disableFilesystemCaching(conf) in config setup; tear down 
to probe the FS for this option, and if true, close the FS (see the sketch 
below)
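
A minimal sketch of the second option, assuming the usual S3A test base hooks 
({{createConfiguration()}}, {{teardown()}}, {{getFileSystem()}}) and the 
existing {{S3ATestUtils.disableFilesystemCaching()}} helper; everything here 
is illustrative, not committed code:

{code}
// Sketch: force an uncached FS for this suite, then close it in teardown.
@Override
protected Configuration createConfiguration() {
  Configuration conf = super.createConfiguration();
  // sets fs.s3a.impl.disable.cache so a new, unshared instance is created
  S3ATestUtils.disableFilesystemCaching(conf);
  return conf;
}

@Override
public void teardown() throws Exception {
  super.teardown();
  FileSystem fs = getFileSystem();
  // Only close instances known to be uncached; cached ones may be shared.
  if (fs != null
      && fs.getConf().getBoolean("fs.s3a.impl.disable.cache", false)) {
    fs.close();
  }
}
{code}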




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16406) ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-16406:
---

Assignee: Steve Loughran

> ITestDynamoDBMetadataStore.testProvisionTable times out intermittently
> --
>
> Key: HADOOP-16406
> URL: https://issues.apache.org/jira/browse/HADOOP-16406
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Sometimes on test runs, ITestDynamoDBMetadataStore.testProvisionTable times 
> out because AWS takes too long to resize a table.
> {code}
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 100.011 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 10 
> milliseconds
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:963)
> {code}
> Given we are moving off provisioned IO to on-demand, I propose cutting this 
> test entirely



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16406) ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

2019-07-02 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877166#comment-16877166
 ] 

Steve Loughran commented on HADOOP-16406:
-

Doing this in HADOOP-16384

> ITestDynamoDBMetadataStore.testProvisionTable times out intermittently
> --
>
> Key: HADOOP-16406
> URL: https://issues.apache.org/jira/browse/HADOOP-16406
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Steve Loughran
>Priority: Minor
>
> Sometimes on test runs, ITestDynamoDBMetadataStore.testProvisionTable times 
> out because AWS takes too long to resize a table.
> {code}
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 100.011 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 10 
> milliseconds
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:963)
> {code}
> Given we are moving off provisioned IO to on-demand, I propose cutting this 
> test entirely



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16406) ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16406 started by Steve Loughran.
---
> ITestDynamoDBMetadataStore.testProvisionTable times out intermittently
> --
>
> Key: HADOOP-16406
> URL: https://issues.apache.org/jira/browse/HADOOP-16406
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Sometimes on test runs, ITestDynamoDBMetadataStore.testProvisionTable times 
> out because AWS takes too long to resize a table.
> {code}
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 100.011 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 10 
> milliseconds
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:963)
> {code}
> Given we are moving off provisioned IO to on-demand, I propose cutting this 
> test entirely



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16406) ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

2019-07-02 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16406:
---

 Summary: ITestDynamoDBMetadataStore.testProvisionTable times out 
intermittently
 Key: HADOOP-16406
 URL: https://issues.apache.org/jira/browse/HADOOP-16406
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Reporter: Steve Loughran


Sometimes on test runs, ITestDynamoDBMetadataStore.testProvisionTable times out 
because AWS takes too long to resize a table.

{code}
[ERROR] 
testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore) 
 Time elapsed: 100.011 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 10 
milliseconds
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:963)
{code}

Given we are moving off provisioned IO to on-demand, I propose cutting this 
test entirely



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #979: HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile

2019-07-02 Thread GitBox
anuengineer commented on issue #979: HDDS-1698. Switch to use 
apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979#issuecomment-507760683
 
 
   @elek  Thank you for the contribution. I have committed this change


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer merged pull request #979: HDDS-1698. Switch to use apache/ozone-runner in the compose/Dockerfile

2019-07-02 Thread GitBox
anuengineer merged pull request #979: HDDS-1698. Switch to use 
apache/ozone-runner in the compose/Dockerfile
URL: https://github.com/apache/hadoop/pull/979
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on issue #1010: HDFS-13694. Making md5 computing being in parallel with image loading.

2019-07-02 Thread GitBox
goiri commented on issue #1010: HDFS-13694. Making md5 computing being in 
parallel with image loading.
URL: https://github.com/apache/hadoop/pull/1010#issuecomment-507759582
 
 
   LGTM
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mackrorysd commented on a change in pull request #1009: HADOOP-16383. Pass ITtlTimeProvider instance in initialize method in …

2019-07-02 Thread GitBox
mackrorysd commented on a change in pull request #1009: HADOOP-16383. Pass 
ITtlTimeProvider instance in initialize method in …
URL: https://github.com/apache/hadoop/pull/1009#discussion_r299563868
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
 ##
 @@ -377,11 +377,12 @@ private DynamoDB createDynamoDB(
* FS via {@link S3AFileSystem#shareCredentials(String)}; this will
* increment the reference counter of these credentials.
* @param fs {@code S3AFileSystem} associated with the MetadataStore
   * @param ttlTimeProvider provider of the current time for TTL expiry checks
* @throws IOException on a failure
*/
   @Override
   @Retries.OnceRaw
-  public void initialize(FileSystem fs) throws IOException {
+  public void initialize(FileSystem fs, ITtlTimeProvider ttlTimeProvider) 
throws IOException {
 
 Review comment:
   Discussed offline with Gabor. Outcome of that conversation: 
bindToOwnerFileSystem doesn't exist everywhere, and there isn't already a 
context created outside of the context (ha!) of certain operations. But we 
should have a context created earlier, since it doesn't contain state that 
changes between operations (I actually wonder why we're creating a new 
instance for every operation instead of the metadatastore getting a permanent 
context). We need to check that the context is complete enough, as this is 
called during FS initialization, precisely when the createStoreContext() 
javadoc warns you to be careful :) 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-07-02 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877074#comment-16877074
 ] 

Steve Loughran commented on HADOOP-16405:
-

The existing wildfly connector has some issues which forced us to roll back 
from supporting it in the s3a code. It'd be good to know whether those 
problems have gone, so we could revisit that issue.

* Is there a list of fixes?
* Are there any CVE fixes we need to worry about?

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Priority: Major
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16405:

Affects Version/s: 3.2.0

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Affects Versions: 3.2.0
>Reporter: Vishwajeet Dusane
>Priority: Major
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16405:

Component/s: build

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/azure
>Reporter: Vishwajeet Dusane
>Priority: Major
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16405:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-9991

> Upgrade Wildfly Openssl version to 1.0.7.Final
> --
>
> Key: HADOOP-16405
> URL: https://issues.apache.org/jira/browse/HADOOP-16405
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Vishwajeet Dusane
>Priority: Major
>
> Upgrade Wildfly Openssl version to 1.0.7.Final. This version has SNI support 
> which is essential for firewall enabled clusters along with many stability 
> related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16384) ITestS3AContractRootDir failing: inconsistent DDB tables

2019-07-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877056#comment-16877056
 ] 

Hadoop QA commented on HADOOP-16384:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
17s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 47s{color} | {color:orange} root: The patch generated 10 new + 56 unchanged 
- 2 fixed = 66 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
51s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Private method 
org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardT

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299539273
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that 
table should be different
  incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ 
out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   
"updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  
"version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  
"UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  
1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   
1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 
18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  
"false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 
BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 
18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 
18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 
18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 
18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation. 
+They are, currently:
+
+| field | meaning | source |
+|---|-| ---|
+| `path` | path of an entry | filestatus |
+| `type` | type | filestatus |
+| `is_auth_dir` | directory entry authoritative status | metadata | 
+| `deleted` | tombstone marker | metadata | 
+| `is_empty_dir` | does the entry represent an empty directory | metadata | 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299539262
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that 
table should be different
  incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ 
out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   
"updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  
"version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  
"UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  
1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   
1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 
18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  
"false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 
BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 
18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 
18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 
18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 
18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation. 
+They are, currently:
+
+| field | meaning | source |
+|---|-| ---|
+| `path` | path of an entry | filestatus |
+| `type` | type | filestatus |
+| `is_auth_dir` | directory entry authoritative status | metadata | 
+| `deleted` | tombstone marker | metadata | 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1003: HADOOP-16384: prune resilience.
URL: https://github.com/apache/hadoop/pull/1003#issuecomment-507723240
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 156 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 84 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1479 | trunk passed |
   | +1 | compile | 1414 | trunk passed |
   | +1 | checkstyle | 192 | trunk passed |
   | +1 | mvnsite | 177 | trunk passed |
   | +1 | shadedclient | 1225 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 116 | trunk passed |
   | 0 | spotbugs | 77 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 219 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 99 | the patch passed |
   | +1 | compile | 1241 | the patch passed |
   | +1 | javac | 1241 | the patch passed |
   | -0 | checkstyle | 167 | root: The patch generated 10 new + 56 unchanged - 
2 fixed = 66 total (was 58) |
   | +1 | mvnsite | 141 | the patch passed |
   | -1 | whitespace | 0 | The patch has 4 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 772 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 36 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 83 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 570 | hadoop-common in the patch passed. |
   | +1 | unit | 291 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 8708 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Private method 
org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable.fail(String, Throwable) is 
never called  At DumpS3GuardTable.java:never called  At 
DumpS3GuardTable.java:[line 416] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1003 |
   | JIRA Issue | HADOOP-16384 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 259b97999e7c 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/testReport/ |
   | Max. process+thread count | 1408 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/7/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299539237
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that 
table should be different
  incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ 
out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   
"updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  
"version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  
"UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  
1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   
1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 
18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  
"false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 
BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 
18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 
18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 
18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 
18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation. 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299539251
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that 
table should be different
  incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ 
out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   
"updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  
"version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  
"UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  
1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   
1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 
18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  
"false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 
BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 
18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 
18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 
18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0 
  1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 
18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation. 
+They are, currently:
+
+| field | meaning | source |
+|---|-| ---|
+| `path` | path of an entry | filestatus |
+| `type` | type | filestatus |
+| `is_auth_dir` | directory entry authoritative status | metadata | 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16235) ABFS VersionedFileStatus to declare that it isEncrypted()

2019-07-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877044#comment-16877044
 ] 

Hadoop QA commented on HADOOP-16235:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16235 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973430/HADOOP-16235.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ae2eeaa4e3fe 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e966edd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16366/testReport/ |
| Max. process+thread count | 416 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16366/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS VersionedFileStatus to declare that it isEncrypted()
> -
>
> Key: HADOOP-16235
>

[jira] [Commented] (HADOOP-16384) ITestS3AContractRootDir failing: inconsistent DDB tables

2019-07-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16877038#comment-16877038
 ] 

Hadoop QA commented on HADOOP-16384:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
3s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 31s{color} | {color:orange} root: The patch generated 10 new + 56 unchanged 
- 2 fixed = 66 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 31s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
53s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Private method 
org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable.fai

[GitHub] [hadoop] hadoop-yetus commented on issue #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1003: HADOOP-16384: prune resilience.
URL: https://github.com/apache/hadoop/pull/1003#issuecomment-507718725
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1058 | trunk passed |
   | +1 | compile | 1062 | trunk passed |
   | +1 | checkstyle | 136 | trunk passed |
   | +1 | mvnsite | 121 | trunk passed |
   | +1 | shadedclient | 929 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 90 | trunk passed |
   | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 177 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 74 | the patch passed |
   | +1 | compile | 1019 | the patch passed |
   | +1 | javac | 1019 | the patch passed |
   | -0 | checkstyle | 151 | root: The patch generated 10 new + 56 unchanged - 
2 fixed = 66 total (was 58) |
   | +1 | mvnsite | 117 | the patch passed |
   | -1 | whitespace | 0 | The patch has 4 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 647 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 28 | hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) |
   | -1 | findbugs | 82 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 571 | hadoop-common in the patch failed. |
   | +1 | unit | 293 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6913 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Private method 
org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable.fail(String, Throwable) is 
never called  At DumpS3GuardTable.java:never called  At 
DumpS3GuardTable.java:[line 416] |
   | Failed junit tests | hadoop.ha.TestZKFailoverController |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1003 |
   | JIRA Issue | HADOOP-16384 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux ac1a89cc3a39 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/testReport/ |
   | Max. process+thread count | 1422 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1003/8/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail

[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299533438
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that table should be different
 incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   "updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  "version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation.
+They are, currently:
+
+| field | meaning | source |
+|---|-| ---|
+| `path` | path of an entry | filestatus |
+| `type` | type | filestatus |
+| `is_auth_dir` | directory entry authoritative status | metadata |
+| `deleted` | tombstone marker | metadata |
+| `is_empty_dir` | does the entry represent an empty directory | metadata |
 
 Review comment:
   whitespace:end of line
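
   Since the dump is tab-separated text, it can be post-processed with ordinary tooling. A minimal sketch in Java, assuming the unstable 12-column layout shown in the quoted patch (the `ReadS3GuardDump` class name is hypothetical, and field positions may change without warning):

   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Paths;

   // Sketch: scan the TSV written by DumpS3GuardTable and print the
   // "path" and "deleted" columns. Column indices follow the header row
   // shown above and are assumptions, since the format is unstable.
   public class ReadS3GuardDump {
     public static void main(String[] args) throws IOException {
       String file = args.length > 0 ? args[0] : "out.csv";
       Files.lines(Paths.get(file))
           .skip(1)                              // skip the header row
           .map(line -> line.split("\t"))
           .filter(cols -> cols.length >= 4)     // guard against short rows
           .forEach(cols ->
               System.out.println(cols[0] + " deleted=" + cols[3]));
     }
   }
   ```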
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299533403
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that table should be different
 incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   "updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  "version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation.
+They are, currently:
+
+| field | meaning | source |
+|---|-| ---|
+| `path` | path of an entry | filestatus |
+| `type` | type | filestatus |
+| `is_auth_dir` | directory entry authoritative status | metadata |
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299533389
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that table should be different
 incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   "updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  "version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation.
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune resilience.

2019-07-02 Thread GitBox
hadoop-yetus commented on a change in pull request #1003: HADOOP-16384: prune 
resilience.
URL: https://github.com/apache/hadoop/pull/1003#discussion_r299533417
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -1100,6 +1100,51 @@ property should be configured, and the name of that table should be different
 incurring AWS charges.
 
 
+### How to dump the table to a CSV file
+
+There's an unstable, unsupported command to list the contents of a table
+to a CSV, or more specifically a TSV file, on the local system
+
+```
+hadoop org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardTable s3a://bucket-x/ out.csv
+```
+This generates a file which can then be viewed on the command line or editor:
+
+```
+"path"  "type"  "is_auth_dir"   "deleted"   "is_empty_dir"  "len"   "updated"   "updated_s" "last_modified" "last_modified_s"   "etag"  "version"
+"s3a://bucket-x/FileSystemContractBaseTest"  "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483826881   "Tue Jun 25 18:30:26 BST 2019"  ""  ""
+"s3a://bucket-x/Users"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484376835   "Tue Jun 25 18:39:36 BST 2019"  ""  ""
+"s3a://bucket-x/dest-6f578c72-eb40-4767-a89d-66a6a5b89578"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561483757615   "Tue Jun 25 18:29:17 BST 2019"  ""  ""
+"s3a://bucket-x/file.txt""file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484382603   "Tue Jun 25 18:39:42 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0001"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484378086   "Tue Jun 25 18:39:38 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0002"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484380177   "Tue Jun 25 18:39:40 BST 2019"  ""  ""
+"s3a://bucket-x/fork-0003"   "file"  "false" "true"  "UNKNOWN"   0   1561484415455   "Tue Jun 25 18:40:15 BST 2019"  1561484379690   "Tue Jun 25 18:39:39 BST 2019"  ""  ""
+```
+
+This is unstable: the output format may change without warning.
+To understand the meaning of the fields, consult the documentation.
+They are, currently:
+
+| field | meaning | source |
+|---|-| ---|
+| `path` | path of an entry | filestatus |
+| `type` | type | filestatus |
+| `is_auth_dir` | directory entry authoritative status | metadata |
+| `deleted` | tombstone marker | metadata |
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16405) Upgrade Wildfly Openssl version to 1.0.7.Final

2019-07-02 Thread Vishwajeet Dusane (JIRA)
Vishwajeet Dusane created HADOOP-16405:
--

 Summary: Upgrade Wildfly Openssl version to 1.0.7.Final
 Key: HADOOP-16405
 URL: https://issues.apache.org/jira/browse/HADOOP-16405
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Reporter: Vishwajeet Dusane


Upgrade the Wildfly OpenSSL version to 1.0.7.Final. This version has SNI support, 
which is essential for firewall-enabled clusters, along with many 
stability-related fixes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2019-07-02 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876996#comment-16876996
 ] 

Steve Loughran commented on HADOOP-15679:
-

[~yumwang] -thanks; corrected

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.2, 3.0.4, 3.1.2, 2.8.6
>
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch, 
> HADOOP-15679-branch-2.8-005.patch, HADOOP-15679-branch-2.8-005.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)
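
A minimal sketch of the proposal above, assuming the time-duration property is 
named `hadoop.service.shutdown.timeout` (both the key name and the 
`SetShutdownTimeout` class are assumptions here, not the committed patch):

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

// Sketch: set the shutdown-hook wait as a time-duration property rather
// than relying on the hard-coded default. Key name is an assumption.
public class SetShutdownTimeout {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Proposed default: 30s rather than 10s; raise further if close()
    // must flush large cached blocks to an object store.
    conf.setTimeDuration("hadoop.service.shutdown.timeout",
        30, TimeUnit.SECONDS);
    System.out.println(conf.getTimeDuration(
        "hadoop.service.shutdown.timeout", 10, TimeUnit.SECONDS) + "s");
  }
}
```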



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2019-07-02 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15679:

Fix Version/s: (was: 2.8.5)
   2.8.6

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.2, 3.0.4, 3.1.2, 2.8.6
>
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch, 
> HADOOP-15679-branch-2.8-005.patch, HADOOP-15679-branch-2.8-005.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then executes the final multipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876985#comment-16876985
 ] 

Ayush Saxena commented on HADOOP-16385:
---

Have uploaded a new patch touching one line in the HDFS code. The main patch 
still stays at v03.

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description: *While removing dead nodes, Namenode went down with error 
> "RedundancyMonitor thread received Runtime exception"
> *Environment: *
> Server OS :- UBUNTU
>  No. of Cluster Node:- 1NN / 225DN's / 3ZK  / 2RM/ 4850 NMs
> total 240 machines, in each machine 21 docker containers (1 DN & 20 NM's)
> *Steps:*
> 1. Total number of containers in running state: ~53000
> 2. Because of the load, the machine was running out of memory, restarting, and 
> then starting all the docker containers, including the NM's and DN's.
> 3. At some point the namenode throws the below error while removing a node, and 
> the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
> at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.lang.IllegalArgumentException: 247 should >= 248, and both 
> should be positive.
> 2019-06-19 05:54:07,298 INFO 
> o

[jira] [Commented] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876991#comment-16876991
 ] 

Steve Loughran commented on HADOOP-16404:
-

seems reasonable

* As usual, which endpoint did you run the hadoop-azure abfs tests against?
* Do you think it's time to actually document all these options?


> ABFS default blocksize change(256MB from 512MB)
> ---
>
> Key: HADOOP-16404
> URL: https://issues.apache.org/jira/browse/HADOOP-16404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.1.2
>Reporter: Arun Singh
>Priority: Major
>  Labels: patch
> Fix For: 3.1.2
>
> Attachments: HADOOP-16404.patch
>
>
> We intend to change the default blocksize of the ABFS driver to 256MB from 
> 512MB.
> After changing the blocksize we have performed a series of tests (Spark Tera, 
> Spark DFSIO, TPCDS on HIVE) and have seen consistent improvements in the order 
> of 4-5%.
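
A minimal sketch of pinning the block size explicitly rather than relying on 
the default discussed above, assuming the driver reads it from 
`fs.azure.block.size` (both the key name and the `PinAbfsBlockSize` class are 
assumptions; verify against the hadoop-azure documentation):

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: set the ABFS block size to the proposed 256MB default.
// The property name is an assumption, not confirmed by this issue.
public class PinAbfsBlockSize {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setLong("fs.azure.block.size", 256L * 1024 * 1024); // 256 MB
    System.out.println(conf.getLong("fs.azure.block.size", 0L));
  }
}
```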



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16385) Namenode crashes with "RedundancyMonitor thread received Runtime exception"

2019-07-02 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16385:
--
Attachment: HADOOP-16385-HDFS_UT.patch

> Namenode crashes with "RedundancyMonitor thread received Runtime exception"
> ---
>
> Key: HADOOP-16385
> URL: https://issues.apache.org/jira/browse/HADOOP-16385
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: krishna reddy
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16385-01.patch, HADOOP-16385-02.patch, 
> HADOOP-16385-03.patch, HADOOP-16385-HDFS_UT.patch, 
> HADOOP-16385.branch-3.1.001.patch
>
>
> *Description: *While removing dead nodes, Namenode went down with error 
> "RedundancyMonitor thread received Runtime exception"
> *Environment: *
> Server OS :- UBUNTU
>  No. of Cluster Node:- 1NN / 225DN's / 3ZK  / 2RM/ 4850 NMs
> total 240 machines, in each machine 21 docker containers (1 DN & 20 NM's)
> *Steps:*
> 1. Total number of containers in running state: ~53000
> 2. Because of the load, the machine was running out of memory, restarting, and 
> then starting all the docker containers, including the NM's and DN's.
> 3. At some point the namenode throws the below error while removing a node, and 
> the NN went down.
> {noformat}
> 2019-06-19 05:54:07,262 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-1550/255.255.117.195:23735
> 2019-06-19 05:54:07,263 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.117.151:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,281 INFO org.apache.hadoop.net.NetworkTopology: Removing 
> a node: /rack-4097/255.255.117.151:23735
> 2019-06-19 05:54:07,282 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> removeDeadDatanode: lost heartbeat from 255.255.116.213:23735, 
> removeBlocksFromBlockMap true
> 2019-06-19 05:54:07,290 ERROR 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: RedundancyMonitor 
> thread received Runtime exception.
> java.lang.IllegalArgumentException: 247 should >= 248, and both should be 
> positive.
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:575)
> at 
> org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:552)
> at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial(DFSNetworkTopology.java:122)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:873)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:770)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRemoteRack(BlockPlacementPolicyDefault.java:712)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:507)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:425)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargets(BlockPlacementPolicyDefault.java:311)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:290)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:143)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.chooseTarget(BlockPlacementPolicy.java:103)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:51)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReconstructionWorkForBlocks(BlockManager.java:1902)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeBlockReconstructionWork(BlockManager.java:1854)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:4842)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$RedundancyMonitor.run(BlockManager.java:4709)
> at java.lang.Thread.run(Thread.java:748)
> 2019-06-19 05:54:07,296 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: java.lang.IllegalArgumentException: 247 should >= 248, and both 
> should be positive.
> 2019-06-19 05:54:07,298 INFO 
> org.apache.hadoop.hdfs.server.common.HadoopAuditLogger.audit: 
> process=Namenode operation=shutdown

[GitHub] [hadoop] hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memor…

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1011: HDFS-14313. Get hdfs used space from 
FsDatasetImpl#volumeMap#ReplicaInfo in memor…
URL: https://github.com/apache/hadoop/pull/1011#issuecomment-507681493
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 514 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1137 | trunk passed |
   | +1 | compile | 1182 | trunk passed |
   | +1 | checkstyle | 145 | trunk passed |
   | +1 | mvnsite | 152 | trunk passed |
   | +1 | shadedclient | 1017 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 116 | trunk passed |
   | 0 | spotbugs | 186 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 310 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 111 | the patch passed |
   | +1 | compile | 1090 | the patch passed |
   | +1 | javac | 1090 | the patch passed |
   | -0 | checkstyle | 142 | root: The patch generated 3 new + 245 unchanged - 
1 fixed = 248 total (was 246) |
   | +1 | mvnsite | 153 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 641 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 108 | the patch passed |
   | +1 | findbugs | 301 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 539 | hadoop-common in the patch passed. |
   | -1 | unit | 4969 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 12722 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1011 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1fccc9fdfdc8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/3/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/3/testReport/ |
   | Max. process+thread count | 4353 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1011/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1047: HDDS-1750. Add block allocation metrics for pipelines in SCM

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1047: HDDS-1750. Add block allocation metrics 
for pipelines in SCM
URL: https://github.com/apache/hadoop/pull/1047#issuecomment-507677190
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 69 | Maven dependency ordering for branch |
   | +1 | mvninstall | 475 | trunk passed |
   | +1 | compile | 241 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 800 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 323 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 509 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 454 | the patch passed |
   | +1 | compile | 262 | the patch passed |
   | +1 | javac | 262 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 649 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 568 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 251 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2609 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 7577 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.container.metrics.TestContainerMetrics |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1047/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1047 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 16659bc524d1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1047/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1047/1/testReport/ |
   | Max. process+thread count | 3787 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1047/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16235) ABFS VersionedFileStatus to declare that it isEncrypted()

2019-07-02 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876966#comment-16876966
 ] 

Masatake Iwasaki commented on HADOOP-16235:
---

Attached 003, addressing the javac and checkstyle warnings.

> ABFS VersionedFileStatus to declare that it isEncrypted()
> -
>
> Key: HADOOP-16235
> URL: https://issues.apache.org/jira/browse/HADOOP-16235
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16235.001.patch, HADOOP-16235.002.patch, 
> HADOOP-16235.003.patch
>
>
> Files in ABFS are always encrypted; have VersionedFileStatus.isEncrypted() 
> declare this, presumably just by changing the flag passed to the superclass's 
> constructor
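
A minimal sketch of that idea, not the committed patch: pass `isEncrypted=true` 
through the `FileStatus` constructor that takes the attribute flags (the 
`AlwaysEncryptedFileStatus` class is hypothetical):

```java
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: a FileStatus subclass for a store whose data is always
// encrypted can pass isEncrypted=true to the superclass constructor,
// so isEncrypted() reports true without further overrides.
class AlwaysEncryptedFileStatus extends FileStatus {
  AlwaysEncryptedFileStatus(long length, boolean isDir, int replication,
      long blockSize, long mtime, long atime, FsPermission permission,
      String owner, String group, Path path) {
    // The last three booleans are hasAcl, isEncrypted, isErasureCoded.
    super(length, isDir, replication, blockSize, mtime, atime, permission,
        owner, group, /* symlink */ null, path,
        /* hasAcl */ false, /* isEncrypted */ true,
        /* isErasureCoded */ false);
  }
}
```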



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16235) ABFS VersionedFileStatus to declare that it isEncrypted()

2019-07-02 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16235:
--
Attachment: HADOOP-16235.003.patch

> ABFS VersionedFileStatus to declare that it isEncrypted()
> -
>
> Key: HADOOP-16235
> URL: https://issues.apache.org/jira/browse/HADOOP-16235
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-16235.001.patch, HADOOP-16235.002.patch, 
> HADOOP-16235.003.patch
>
>
> Files in ABFS are always encrypted; have VersionedFileStatus.isEncrypted() 
> declare this, presumably just by changing the flag passed to the superclass's 
> constructor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1019: HDDS-1603. Handle Ratis Append Failure in Container State Machine. Contributed by Supratim Deka

2019-07-02 Thread GitBox
hadoop-yetus commented on issue #1019: HDDS-1603. Handle Ratis Append Failure 
in Container State Machine. Contributed by Supratim Deka
URL: https://github.com/apache/hadoop/pull/1019#issuecomment-507665890
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 481 | trunk passed |
   | +1 | compile | 248 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 501 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 424 | the patch passed |
   | +1 | compile | 255 | the patch passed |
   | +1 | cc | 255 | the patch passed |
   | +1 | javac | 255 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 674 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 508 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 237 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1635 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 6536 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1019 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 4cc2bbd8e0ff 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / e966edd |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/2/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/2/testReport/ |
   | Max. process+thread count | 4724 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1019/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876925#comment-16876925
 ] 

Hadoop QA commented on HADOOP-16404:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16404 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973417/HADOOP-16404.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 902cfe23bc6a 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e966edd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16365/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16365/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS default blocksize change(256MB from 512MB)
> ---
>
> Key: HADOOP-16404
> URL: https://iss
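
For context (the quoted entry above is truncated in the archive): the change lowers the ABFS default block size from 512 MB to 256 MB. Below is a minimal sketch of overriding the value per-job, assuming fs.azure.block.size is the configuration key the ABFS connector reads; that key, like the class name, is an assumption here, not confirmed by this excerpt.

```java
import org.apache.hadoop.conf.Configuration;

public class AbfsBlockSizeOverride {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumption: fs.azure.block.size is the key the ABFS connector reads for
    // the block size it reports to clients. The patch changes the shipped
    // default from 512 MB to 256 MB; an explicit setting overrides either.
    conf.setLong("fs.azure.block.size", 256L * 1024 * 1024);
    System.out.println(conf.getLong("fs.azure.block.size", 0));
  }
}
```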

[jira] [Commented] (HADOOP-16401) ABFS: port Azure doc to 3.2 branch

2019-07-02 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876915#comment-16876915
 ] 

Hadoop QA commented on HADOOP-16401:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
18s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:63396be |
| JIRA Issue | HADOOP-16401 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973414/HADOOP-16401-branch-3.2.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 506490c1452b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / bea79e7 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 446 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16364/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ABFS: port Azure doc to 3.2 branch
> --
>
> Key: HADOOP-16401
> URL: https://issues.apache.org/jira/browse/HADOOP-16401
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HADOOP-16401-branch-3.2.001.patch
>
>
> Need to port the latest Azure markdown docs from trunk to 3.2.0.






[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2019-07-02 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876884#comment-16876884
 ] 

Yuming Wang commented on HADOOP-15679:
--

Hi [~ste...@apache.org], it seems the Fix Version/s should be {{2.9.2, 2.8.6, 3.0.4, 
3.1.2}}, not {{2.9.2, 2.8.5, 3.0.4, 3.1.2}}.

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.2, 2.8.5, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch, 
> HADOOP-15679-branch-2.8-005.patch, HADOOP-15679-branch-2.8-005.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (which could be 32+ MB) and then executes the final 
> multipart commit.
> Proposed:
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with a minimum time of 
> 1s?) (see the sketch below)
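
A minimal sketch of what the two proposed points might look like in code, assuming Hadoop's Configuration.getTimeDuration API; the property name and default values below are illustrative guesses based on the proposal, not copied from the committed patch.

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public final class ShutdownTimeoutSketch {
  // Assumed names and values, derived from the proposal above.
  static final String SERVICE_SHUTDOWN_TIMEOUT = "hadoop.service.shutdown.timeout";
  static final long SERVICE_SHUTDOWN_TIMEOUT_DEFAULT = 30; // seconds, up from 10
  static final long SERVICE_SHUTDOWN_TIMEOUT_MINIMUM = 1;  // floor from point #2

  static long getShutdownTimeout(Configuration conf) {
    long timeout = conf.getTimeDuration(SERVICE_SHUTDOWN_TIMEOUT,
        SERVICE_SHUTDOWN_TIMEOUT_DEFAULT, TimeUnit.SECONDS);
    // Clamp to the proposed 1s minimum so a misconfigured value of 0 (or a
    // negative number) cannot disable the shutdown hook wait entirely.
    return Math.max(timeout, SERVICE_SHUTDOWN_TIMEOUT_MINIMUM);
  }
}
```

Using a time duration property rather than a plain integer lets deployments write values like {{30s}} or {{2m}} directly in the configuration.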






[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #1022: HDDS-1728. Add metrics for leader's latency in ContainerStateMachine. Contributed by Mukul Kumar Singh.

2019-07-02 Thread GitBox
lokeshj1703 commented on a change in pull request #1022: HDDS-1728. Add metrics 
for leader's latency in ContainerStateMachine. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1022#discussion_r299425027
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
 ##
 @@ -43,14 +46,28 @@
   private @Metric MutableCounterLong numBytesWrittenCount;
   private @Metric MutableCounterLong numBytesCommittedCount;
 
+  private @Metric MutableRate transactionLatency;
+  private MutableRate[] opsLatency;
+  private MetricsRegistry registry = null;
+
   // Failure Metrics
   private @Metric MutableCounterLong numWriteStateMachineFails;
   private @Metric MutableCounterLong numQueryStateMachineFails;
   private @Metric MutableCounterLong numApplyTransactionFails;
   private @Metric MutableCounterLong numReadStateMachineFails;
   private @Metric MutableCounterLong numReadStateMachineMissCount;
+  private @Metric MutableCounterLong numStartTransactionVerifyFailures;
+  private @Metric MutableCounterLong numContainerNotOpenVerifyFailures;
 
   public CSMMetrics() {
+int numEnumEntries = ContainerProtos.Type.values().length;
 
 Review comment:
   Can we rename numEnumEntries to numCmdTypes or something like that?
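
For illustration (not part of the quoted diff or the actual patch): a sketch of the constructor body with the suggested rename applied, assuming the MetricsRegistry.newRate API and indexing the per-command-type rates by enum ordinal. The registry setup and metric names are guesses; the fragment is meant to slot into the CSMMetrics class quoted above.

```java
public CSMMetrics() {
  // "numCmdTypes" per the review suggestion: one MutableRate per
  // ContainerProtos.Type command type.
  int numCmdTypes = ContainerProtos.Type.values().length;
  this.registry = new MetricsRegistry(CSMMetrics.class.getSimpleName());
  this.opsLatency = new MutableRate[numCmdTypes];
  for (ContainerProtos.Type type : ContainerProtos.Type.values()) {
    // Indexed by enum ordinal so the hot path can update a rate without
    // a map lookup.
    opsLatency[type.ordinal()] =
        registry.newRate(type + "Latency", type + " op latency");
  }
}
```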




