[jira] [Commented] (HADOOP-15998) Ensure jar validation works on Windows.
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919196#comment-16919196 ]

Rohith Sharma K S commented on HADOOP-15998:

Thanks [~busbey] for committing the patch!

> Ensure jar validation works on Windows.
> ---
>
>                 Key: HADOOP-15998
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15998
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 3.2.0, 3.3.0
>         Environment: Windows 10
>                      Visual Studio 2017
>            Reporter: Brian Grunkemeyer
>            Assignee: Brian Grunkemeyer
>            Priority: Blocker
>              Labels: build, windows
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid assumptions:
> 1) Colons shouldn't be used to separate multiple paths in command-line parameters, because colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with carriage-return/line-feed differences (lines ending in \r\n, not just \n).

--
This message was sent by Atlassian Jira
(v8.3.2#803003)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
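The two portability problems above can be sketched in bash. This is a minimal illustration, not the actual HADOOP-15998 patch: the variable names and the `~` separator are invented for the example.

```shell
#!/usr/bin/env bash
# 1) Use a separator that cannot appear in a path ('~' here, an assumption of
#    this sketch) instead of ':', since Windows paths like C:\work contain colons.
allowed='C:\work\a.jar~C:\work\b.jar'
IFS='~' read -r -a jars <<< "${allowed}"

# 2) Strip trailing carriage returns from external tool output, so a line
#    ending in \r\n compares equal to the same line ending in \n.
line=$(printf 'org/apache/hadoop/Foo.class\r\n' | tr -d '\r')

echo "${#jars[@]}"   # number of parsed jar paths
echo "${line}"
```

Splitting on a custom `IFS` keeps each Windows path (drive letter and all) in one array element, and `tr -d '\r'` makes the comparison logic line-ending agnostic.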
[jira] [Commented] (HADOOP-15998) Ensure jar validation works on Windows.
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16919187#comment-16919187 ]

Hudson commented on HADOOP-15998:

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17202 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17202/])
HADOOP-15998. Ensure jar validation works on Windows. (busbey: rev d59fc59c9ffceb0494edebb3f579b3243b1e15c8)
* (edit) hadoop-client-modules/hadoop-client-check-invariants/src/test/resources/ensure-jars-have-correct-contents.sh
* (edit) hadoop-client-modules/hadoop-client-check-test-invariants/src/test/resources/ensure-jars-have-correct-contents.sh
* (edit) hadoop-client-modules/hadoop-client-check-invariants/pom.xml
* (edit) hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
[jira] [Updated] (HADOOP-15998) Ensure jar validation works on Windows.
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HADOOP-15998:

    Summary: Ensure jar validation works on Windows. (was: Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n))
[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)
[ https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HADOOP-15998:

    Fix Version/s: 3.1.3
                   3.2.1
                   3.3.0
       Resolution: Fixed
           Status: Resolved (was: Patch Available)
[jira] [Updated] (HADOOP-16539) ABFS: Add missing query parameter for getPathStatus
[ https://issues.apache.org/jira/browse/HADOOP-16539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Da Zhou updated HADOOP-16539:

    Summary: ABFS: Add missing query parameter for getPathStatus (was: ABFS: Add missing query parameter for getFileStatus)

> ABFS: Add missing query parameter for getPathStatus
> ---
>
>              Key: HADOOP-16539
>              URL: https://issues.apache.org/jira/browse/HADOOP-16539
>          Project: Hadoop Common
>       Issue Type: Sub-task
>       Components: fs/azure
> Affects Versions: 3.2.0
>         Reporter: Da Zhou
>         Priority: Major
>
> When calling [getPathStatus|https://github.com/apache/hadoop/blob/e220dac15cc9972ebdd54ea9c82f288f234fca51/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java#L356], the query parameter "action=getStatus" is missing.
[jira] [Created] (HADOOP-16539) ABFS: Add missing query parameter for getFileStatus
Da Zhou created HADOOP-16539:

         Summary: ABFS: Add missing query parameter for getFileStatus
             Key: HADOOP-16539
             URL: https://issues.apache.org/jira/browse/HADOOP-16539
         Project: Hadoop Common
      Issue Type: Sub-task
      Components: fs/azure
Affects Versions: 3.2.0
        Reporter: Da Zhou

When calling [getPathStatus|https://github.com/apache/hadoop/blob/e220dac15cc9972ebdd54ea9c82f288f234fca51/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java#L356], the query parameter "action=getStatus" is missing.
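The missing parameter is easiest to see in the request URL itself. A hedged sketch follows: the storage account, filesystem, and path below are made up, and the real client assembles this URL internally rather than by string concatenation.

```shell
#!/usr/bin/env bash
# Hypothetical ABFS path URL; "myaccount", "myfs" and the file path are invented.
base="https://myaccount.dfs.core.windows.net/myfs/dir/file.txt"

# Without the fix, the status request carries no action parameter.
# With the fix, the desired operation is stated explicitly:
url="${base}?action=getStatus"
echo "${url}"
```

The fix amounts to appending `action=getStatus` to the getPathStatus request, so the service knows which properties operation is intended.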
[GitHub] [hadoop] xiaoxiaopan118 commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index
xiaoxiaopan118 commented on a change in pull request #1028: HDFS-14617 - Improve fsimage load time by writing sub-sections to the fsimage index
URL: https://github.com/apache/hadoop/pull/1028#discussion_r319346617

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java ##

@@ -883,6 +883,22 @@
   public static final String DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY =
       "dfs.image.transfer.chunksize";
   public static final int DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT = 64 * 1024;

Review comment: Could you add a comment here? e.g. // NameNode fsimage start parallel

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach…
hadoop-yetus commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach…
URL: https://github.com/apache/hadoop/pull/1363#issuecomment-526440912

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 75 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 1 | Shelldocs was not available. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 72 | Maven dependency ordering for branch |
| +1 | mvninstall | 612 | trunk passed |
| +1 | compile | 386 | trunk passed |
| +1 | checkstyle | 85 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 866 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 182 | trunk passed |
| 0 | spotbugs | 441 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 645 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 42 | Maven dependency ordering for patch |
| +1 | mvninstall | 564 | the patch passed |
| +1 | compile | 391 | the patch passed |
| +1 | javac | 391 | the patch passed |
| +1 | checkstyle | 89 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | shellcheck | 0 | There were no new shellcheck issues. |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 759 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 178 | the patch passed |
| +1 | findbugs | 665 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 335 | hadoop-hdds in the patch passed. |
| -1 | unit | 2115 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
| | | | 8398 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.om.TestSecureOzoneManager |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
| | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.ozone.scm.node.TestQueryNode |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1363 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
| uname | Linux fa4a4cf2f8f9 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 7bebad6 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/5/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/5/testReport/ |
| Max. process+thread count | 5409 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/container-service hadoop-ozone/dist hadoop-ozone/integration-test U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/5/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.
timmylicheng commented on a change in pull request #1366: HDDS-1577. Add default pipeline placement policy implementation.
URL: https://github.com/apache/hadoop/pull/1366#discussion_r319342738

## File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java ##

@@ -0,0 +1,237 @@
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMCommonPolicy;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.net.NetworkTopology;
+import org.apache.hadoop.hdds.scm.net.Node;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * Pipeline placement policy that chooses datanodes based on load balancing
+ * and network topology to supply pipeline creation.
+ *
+ * 1. Get a list of healthy nodes.
+ * 2. Filter out viable nodes that either don't have enough space left
+ *    or are too heavily engaged in other pipelines.
+ * 3. Choose an anchor node among the viable nodes, following the algorithm
+ *    described in @SCMContainerPlacementCapacity.
+ * 4. Choose other nodes around the anchor node based on network topology.
+ */
+public final class PipelinePlacementPolicy extends SCMCommonPolicy {
+  @VisibleForTesting
+  static final Logger LOG =
+      LoggerFactory.getLogger(PipelinePlacementPolicy.class);
+  private final NodeManager nodeManager;
+  private final Configuration conf;
+  private final int heavy_node_criteria;
+
+  /**
+   * Constructs a placement policy considering only capacity;
+   * that is, this policy tries to place pipelines based on node weight.
+   *
+   * @param nodeManager Node Manager
+   * @param conf Configuration
+   */
+  public PipelinePlacementPolicy(final NodeManager nodeManager,
+      final Configuration conf) {
+    super(nodeManager, conf);
+    this.nodeManager = nodeManager;
+    this.conf = conf;
+    heavy_node_criteria = conf.getInt(
+        ScmConfigKeys.OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT,
+        ScmConfigKeys.OZONE_SCM_DATANODE_MAX_PIPELINE_ENGAGEMENT_DEFAULT);
+  }
+
+  /**
+   * Returns true if this node meets the criteria.
+   *
+   * @param datanodeDetails DatanodeDetails
+   * @return true if we have enough space.
+   */
+  boolean meetCriteria(DatanodeDetails datanodeDetails, long sizeRequired) {
+    SCMNodeMetric nodeMetric = nodeManager.getNodeStat(datanodeDetails);
+    boolean hasEnoughSpace = (nodeMetric != null) && (nodeMetric.get() != null)
+        && nodeMetric.get().getRemaining().hasResources(sizeRequired);
+    boolean loadNotTooHeavy =
+        nodeManager.getPipelinesCount(datanodeDetails) <= heavy_node_criteria;
+    return hasEnoughSpace && loadNotTooHeavy;
+  }
+
+  /**
+   * Filter out viable nodes based on
+   * 1. nodes that are healthy
+   * 2. nodes that have enough space
+   * 3. nodes that are not too heavily engaged in other pipelines
+   *
+   * @param excludedNodes - excluded nodes
+   * @param nodesRequired - number of datanodes required.
+   * @param sizeRequired - size required for the container or block.
+   * @return a list of viable nodes
+   * @throws SCMException when viable nodes are not enough in numbers
+   */
+  List<DatanodeDetails> filterViableNodes(
+      List<DatanodeDetails> excludedNodes,
+      int nodesRequired, final long sizeRequired) throws SCMException {
+    // get nodes in HEALTHY state
+    List<DatanodeDetails> healthyNodes =
+        nodeManager.getNodes(HddsProtos.NodeState.HEALTHY);
+    if (excludedNodes != null) {
+      healthyNodes.removeAll(excludedNodes);
+    }
+    String msg;
+    if (healthyNodes.size() == 0) {
+      msg = "No healthy node found to allocate container.";
+      LOG.error(msg);
+      throw new SCMException(msg, SCMException.ResultCodes
+          .FAILED_TO_FIND_HEALTHY_NODES);
+    }
+
+    if (healthyNodes.size() < nodesRequired) {
+      msg = String.format("Not enough healthy nodes to allocate container. %d "
+          + " datanodes required. Found %d",
+          nodesRequired, healthyNodes.size());
+      LOG.error(msg);
+      throw new SCMException(msg,
[GitHub] [hadoop] dineshchitlangia commented on issue #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on issue #1362: HDDS-2014. Create Symmetric Key for GDPR
URL: https://github.com/apache/hadoop/pull/1362#issuecomment-526429912

The failures are unrelated to the patch.
[GitHub] [hadoop] vivekratnavel commented on issue #1381: HDDS-2044. Remove 'ozone' from the recon module names.
vivekratnavel commented on issue #1381: HDDS-2044. Remove 'ozone' from the recon module names.
URL: https://github.com/apache/hadoop/pull/1381#issuecomment-526426824

@shwetayakkali Can you take care of the conflicts?
[GitHub] [hadoop] vivekratnavel commented on issue #1381: HDDS-2044. Remove 'ozone' from the recon module names.
vivekratnavel commented on issue #1381: HDDS-2044. Remove 'ozone' from the recon module names.
URL: https://github.com/apache/hadoop/pull/1381#issuecomment-526426713

/label ozone
[GitHub] [hadoop] hadoop-yetus commented on issue #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env…
hadoop-yetus commented on issue #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env…
URL: https://github.com/apache/hadoop/pull/1379#issuecomment-526425253

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 51 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 29 | Maven dependency ordering for branch |
| +1 | mvninstall | 704 | trunk passed |
| +1 | compile | 385 | trunk passed |
| +1 | checkstyle | 86 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 872 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 171 | trunk passed |
| 0 | spotbugs | 420 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 628 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 75 | Maven dependency ordering for patch |
| +1 | mvninstall | 538 | the patch passed |
| +1 | compile | 375 | the patch passed |
| +1 | javac | 375 | the patch passed |
| +1 | checkstyle | 82 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 662 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 168 | the patch passed |
| +1 | findbugs | 633 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 305 | hadoop-hdds in the patch passed. |
| -1 | unit | 1916 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | | 7906 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
| | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
| | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
| | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
| | hadoop.ozone.om.TestSecureOzoneManager |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1379 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 019e9ad9a898 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 3b22fcd |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/2/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/2/testReport/ |
| Max. process+thread count | 5296 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/common hadoop-hdds/container-service hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
[GitHub] [hadoop] jojochuang commented on a change in pull request #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
jojochuang commented on a change in pull request #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
URL: https://github.com/apache/hadoop/pull/1314#discussion_r319331328

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodePeerMetrics.java ##

@@ -58,11 +61,13 @@
    * for outlier detection. If the number of samples is below this then
    * outlier detection is skipped.
    */
-  @VisibleForTesting
-  static final long MIN_OUTLIER_DETECTION_SAMPLES = 1000;
+  private final long minOutlierDetectionSamples;

-  public DataNodePeerMetrics(final String name) {
+  public DataNodePeerMetrics(final String name, Configuration conf) {

Review comment: This one arguably breaks compatibility, but given that this class is @InterfaceAudience.Private and @InterfaceStability.Unstable, I think this is acceptable.
[GitHub] [hadoop] jojochuang merged pull request #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
jojochuang merged pull request #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
URL: https://github.com/apache/hadoop/pull/1314
[GitHub] [hadoop] shwetayakkali opened a new pull request #1381: HDDS-2044. Remove 'ozone' from the recon module names.
shwetayakkali opened a new pull request #1381: HDDS-2044. Remove 'ozone' from the recon module names.
URL: https://github.com/apache/hadoop/pull/1381

Changed "ozone-recon" to "recon" and "ozone-recon-codegen" to "recon-codegen".
[GitHub] [hadoop] anuengineer merged pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool
anuengineer merged pull request #1255: HDDS-1935. Improve the visibility with Ozone Insight tool
URL: https://github.com/apache/hadoop/pull/1255
[GitHub] [hadoop] jojochuang commented on issue #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
jojochuang commented on issue #1314: HDFS-14748. Make DataNodePeerMetrics#minOutlierDetectionSamples configurable
URL: https://github.com/apache/hadoop/pull/1314#issuecomment-526403469

+1
[GitHub] [hadoop] anuengineer merged pull request #1374: HDDS-2050. Error while compiling ozone-recon-web
anuengineer merged pull request #1374: HDDS-2050. Error while compiling ozone-recon-web
URL: https://github.com/apache/hadoop/pull/1374
[GitHub] [hadoop] vivekratnavel commented on issue #1374: HDDS-2050. Error while compiling ozone-recon-web
vivekratnavel commented on issue #1374: HDDS-2050. Error while compiling ozone-recon-web
URL: https://github.com/apache/hadoop/pull/1374#issuecomment-526401408

/retest
[GitHub] [hadoop] hadoop-yetus commented on issue #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env…
hadoop-yetus commented on issue #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env…
URL: https://github.com/apache/hadoop/pull/1379#issuecomment-526397215

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 86 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 78 | Maven dependency ordering for branch |
| +1 | mvninstall | 606 | trunk passed |
| +1 | compile | 372 | trunk passed |
| +1 | checkstyle | 72 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 964 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 187 | trunk passed |
| 0 | spotbugs | 441 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 673 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 33 | Maven dependency ordering for patch |
| +1 | mvninstall | 605 | the patch passed |
| +1 | compile | 374 | the patch passed |
| +1 | javac | 374 | the patch passed |
| +1 | checkstyle | 79 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 774 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 164 | the patch passed |
| +1 | findbugs | 703 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 385 | hadoop-hdds in the patch passed. |
| -1 | unit | 2067 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
| | | | 8454 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
| | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
| | hadoop.ozone.om.TestSecureOzoneManager |
| | hadoop.ozone.client.rpc.TestCommitWatcher |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1379 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 1422fae742f6 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / f600fbb |
| Default Java | 1.8.0_212 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/1/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/1/testReport/ |
| Max. process+thread count | 4280 (vs. ulimit of 5500) |
| modules | C: hadoop-hdds/common hadoop-hdds/container-service hadoop-ozone/ozone-manager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1379/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
[GitHub] [hadoop] bharatviswa504 commented on issue #1362: HDDS-2014. Create Symmetric Key for GDPR
bharatviswa504 commented on issue #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#issuecomment-526396808 +1 LGTM. Pending CI.
[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
bharatviswa504 commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319286744 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java ## @@ -312,4 +312,13 @@ private OzoneConsts() { public static final int S3_BUCKET_MIN_LENGTH = 3; public static final int S3_BUCKET_MAX_LENGTH = 64; + //GDPR + public static final String GDPR_ALGORITHM_NAME = "AES"; + public static final int GDPR_RANDOM_SECRET_LENGTH = 32; Review comment: Thank you for the detailed info.
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319277462 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java ## @@ -312,4 +312,13 @@ private OzoneConsts() { public static final int S3_BUCKET_MIN_LENGTH = 3; public static final int S3_BUCKET_MAX_LENGTH = 64; + //GDPR + public static final String GDPR_ALGORITHM_NAME = "AES"; + public static final int GDPR_RANDOM_SECRET_LENGTH = 32; Review comment: @bharatviswa504 - Logged: [HDDS-2059](https://issues.apache.org/jira/browse/HDDS-2059)
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319276053 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java ## @@ -0,0 +1,83 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.hadoop.ozone.security; + +import com.google.common.base.Preconditions; +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.hadoop.ozone.OzoneConsts; + +import java.util.HashMap; +import java.util.Map; + +import javax.crypto.Cipher; +import javax.crypto.spec.SecretKeySpec; + +/** + * Symmetric Key structure for GDPR. + */ +public class GDPRSymmetricKey { + + private SecretKeySpec secretKey; + private Cipher cipher; + private String algorithm; + private String secret; + + public SecretKeySpec getSecretKey() { +return secretKey; + } + + public Cipher getCipher() { +return cipher; + } + + /** + * Default constructor creates key with default values. 
+ * @throws Exception + */ + public GDPRSymmetricKey() throws Exception { +algorithm = OzoneConsts.GDPR_ALGORITHM_NAME; +secret = RandomStringUtils +.randomAlphabetic(OzoneConsts.GDPR_RANDOM_SECRET_LENGTH); +this.secretKey = new SecretKeySpec( +secret.substring(0, OzoneConsts.GDPR_RANDOM_SECRET_LENGTH) +.getBytes(OzoneConsts.GDPR_CHARSET), algorithm); +this.cipher = Cipher.getInstance(algorithm); + } + + /** + * Overloaded constructor creates key with specified values. + * @throws Exception + */ + public GDPRSymmetricKey(String secret, String algorithm) throws Exception { +Preconditions.checkArgument(secret.length() == 32, +"Secret must be exactly 32 characters"); +this.secret = secret; +this.algorithm = algorithm; +this.secretKey = new SecretKeySpec( Review comment: > We are already checking at the start of the method for 32 characters length. We can replace it as below. Addressed in latest commit.
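The suggested simplification can be sketched as follows. This is a hedged sketch, not the committed patch: the class name is shortened, Guava's `Preconditions` is replaced by a plain check, the constants are inlined, and the charset is assumed to be UTF-8.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

/** Illustrative sketch of the simplified overloaded constructor. */
public class GDPRSymmetricKeySketch {

  private final SecretKeySpec secretKey;
  private final Cipher cipher;

  public GDPRSymmetricKeySketch(String secret, String algorithm)
      throws Exception {
    // Length is validated once up front, so the redundant
    // substring(0, 32) before getBytes() can be dropped.
    if (secret.length() != 32) {
      throw new IllegalArgumentException(
          "Secret must be exactly 32 characters");
    }
    this.secretKey = new SecretKeySpec(
        secret.getBytes(StandardCharsets.UTF_8), algorithm);
    this.cipher = Cipher.getInstance(algorithm);
  }

  public SecretKeySpec getSecretKey() {
    return secretKey;
  }

  public Cipher getCipher() {
    return cipher;
  }
}
```

With a 32-character secret, the resulting key material is 32 bytes, a valid AES-256 key for the JCE.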
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319272187 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java ## @@ -0,0 +1,83 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.hadoop.ozone.security; + +import com.google.common.base.Preconditions; +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.hadoop.ozone.OzoneConsts; + +import java.util.HashMap; +import java.util.Map; + +import javax.crypto.Cipher; +import javax.crypto.spec.SecretKeySpec; + +/** + * Symmetric Key structure for GDPR. + */ +public class GDPRSymmetricKey { + + private SecretKeySpec secretKey; + private Cipher cipher; + private String algorithm; + private String secret; + + public SecretKeySpec getSecretKey() { +return secretKey; + } + + public Cipher getCipher() { +return cipher; + } + + /** + * Default constructor creates key with default values. 
+ * @throws Exception + */ + public GDPRSymmetricKey() throws Exception { +algorithm = OzoneConsts.GDPR_ALGORITHM_NAME; +secret = RandomStringUtils +.randomAlphabetic(OzoneConsts.GDPR_RANDOM_SECRET_LENGTH); +this.secretKey = new SecretKeySpec( +secret.substring(0, OzoneConsts.GDPR_RANDOM_SECRET_LENGTH) Review comment: Good catch. I addressed this now in the latest commit.
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319271572 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java ## @@ -0,0 +1,83 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.hadoop.ozone.security; + +import com.google.common.base.Preconditions; +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.hadoop.ozone.OzoneConsts; + +import java.util.HashMap; +import java.util.Map; + +import javax.crypto.Cipher; +import javax.crypto.spec.SecretKeySpec; + +/** + * Symmetric Key structure for GDPR. + */ +public class GDPRSymmetricKey { + + private SecretKeySpec secretKey; + private Cipher cipher; + private String algorithm; + private String secret; + + public SecretKeySpec getSecretKey() { +return secretKey; + } + + public Cipher getCipher() { +return cipher; + } + + /** + * Default constructor creates key with default values. 
+ * @throws Exception + */ + public GDPRSymmetricKey() throws Exception { +algorithm = OzoneConsts.GDPR_ALGORITHM_NAME; +secret = RandomStringUtils +.randomAlphabetic(OzoneConsts.GDPR_RANDOM_SECRET_LENGTH); +this.secretKey = new SecretKeySpec( +secret.substring(0, OzoneConsts.GDPR_RANDOM_SECRET_LENGTH) +.getBytes(OzoneConsts.GDPR_CHARSET), algorithm); +this.cipher = Cipher.getInstance(algorithm); + } + + /** + * Overloaded constructor creates key with specified values. + * @throws Exception + */ + public GDPRSymmetricKey(String secret, String algorithm) throws Exception { +Preconditions.checkArgument(secret.length() == 32, +"Secret must be exactly 32 characters"); Review comment: So, here the instance of the key is not created by a user. It is created purely by the Ozone system as, today, the user does not have to specify the key length/secret, etc. This is just an additional check. This check comes into the picture when a user wants to read a file from a GDPR-enabled bucket. During the read, the RPCClient will fetch the key metadata, and from that will get the secret & algorithm. That is when the RPCClient will construct a symmetric key using this secret & algorithm and be able to decrypt the file for the user. This has been done to support the idea that in future we may want to give users the flexibility to choose their algorithm/secret.
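The read path described here can be illustrated with a small, self-contained sketch. This is not Ozone code: the class and method names (`GdprRoundTrip`, `apply`) are invented for the example, and `Cipher.getInstance("AES")` falls back to the JDK's default AES transformation.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

/** Invented helper, not Ozone code: rebuilds a key from (secret, algorithm). */
public class GdprRoundTrip {

  public static byte[] apply(int mode, String secret, String algorithm,
      byte[] data) throws Exception {
    // Reconstruct the symmetric key exactly as a client would from
    // the stored key metadata.
    SecretKeySpec key = new SecretKeySpec(
        secret.getBytes(StandardCharsets.UTF_8), algorithm);
    Cipher cipher = Cipher.getInstance(algorithm);
    cipher.init(mode, key);
    return cipher.doFinal(data);
  }

  public static void main(String[] args) throws Exception {
    String secret = "0123456789abcdef0123456789abcdef"; // 32 chars = 256 bits
    byte[] plain = "user record".getBytes(StandardCharsets.UTF_8);
    // Write path: encrypt with the generated secret.
    byte[] enc = apply(Cipher.ENCRYPT_MODE, secret, "AES", plain);
    // Read path: a client that fetched {secret, algorithm} from the key
    // metadata rebuilds the key and decrypts.
    byte[] dec = apply(Cipher.DECRYPT_MODE, secret, "AES", enc);
    System.out.println(new String(dec, StandardCharsets.UTF_8)); // prints "user record"
  }
}
```

The point of the round trip is that nothing beyond the stored secret and algorithm name is needed to rebuild the key on the client side.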
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319269729 ## File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/GDPRSymmetricKey.java ## @@ -0,0 +1,83 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.hadoop.ozone.security; + +import com.google.common.base.Preconditions; +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.hadoop.ozone.OzoneConsts; + +import java.util.HashMap; +import java.util.Map; + +import javax.crypto.Cipher; +import javax.crypto.spec.SecretKeySpec; + +/** + * Symmetric Key structure for GDPR. + */ +public class GDPRSymmetricKey { + + private SecretKeySpec secretKey; + private Cipher cipher; + private String algorithm; + private String secret; + + public SecretKeySpec getSecretKey() { +return secretKey; + } + + public Cipher getCipher() { +return cipher; + } + + /** + * Default constructor creates key with default values. 
+ * @throws Exception + */ + public GDPRSymmetricKey() throws Exception { +algorithm = OzoneConsts.GDPR_ALGORITHM_NAME; +secret = RandomStringUtils +.randomAlphabetic(OzoneConsts.GDPR_RANDOM_SECRET_LENGTH); +this.secretKey = new SecretKeySpec( +secret.substring(0, OzoneConsts.GDPR_RANDOM_SECRET_LENGTH) +.getBytes(OzoneConsts.GDPR_CHARSET), algorithm); +this.cipher = Cipher.getInstance(algorithm); + } + + /** + * Overloaded constructor creates key with specified values. + * @throws Exception + */ + public GDPRSymmetricKey(String secret, String algorithm) throws Exception { +Preconditions.checkArgument(secret.length() == 32, +"Secret must be exactly 32 characters"); +this.secret = secret; +this.algorithm = algorithm; +this.secretKey = new SecretKeySpec( Review comment: > And one more question I see this secretKey and cipher are being set, it is nowhere used. Will these be used in further jira's? Yes, they will be used in future Jiras. If they end up unused in future patches, I will make sure to remove them.
[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR
dineshchitlangia commented on a change in pull request #1362: HDDS-2014. Create Symmetric Key for GDPR URL: https://github.com/apache/hadoop/pull/1362#discussion_r319267418 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java ## @@ -312,4 +312,13 @@ private OzoneConsts() { public static final int S3_BUCKET_MIN_LENGTH = 3; public static final int S3_BUCKET_MAX_LENGTH = 64; + //GDPR + public static final String GDPR_ALGORITHM_NAME = "AES"; + public static final int GDPR_RANDOM_SECRET_LENGTH = 32; Review comment: 1. Why 32 bytes long? The random secret length is 32 characters; I think you mistook it for the size of the key. Given 1 char = 8 bits, 32 chars make up 256 bits. 2. Why AES? Short answer: AES is trusted within the US NSA for sharing top secret/security information, which means the algorithm is vetted for the highest security clearance! Long answer: Breaking a symmetric 256-bit key by brute force requires 2^128 times more computational power than a 128-bit key. Fifty supercomputers that could check a billion billion (10^18) AES keys per second (if such a device existed) would, in theory, require about 3×(10^51) years to exhaust the 256-bit key space. That said, every cryptography algorithm gets broken eventually; AES seems good for the foreseeable future :) Aside from this, I will still file a Jira to make the length/algorithm configurable at the cluster level.
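The arithmetic in both points can be checked mechanically. This is a trivial sketch, nothing Ozone-specific:

```java
import java.math.BigInteger;

/** Sanity-checks the key-size arithmetic quoted in the review comment. */
public class KeySpaceCheck {
  public static void main(String[] args) {
    // 32 characters at 8 bits each give 256 bits of key material.
    System.out.println(32 * 8); // 256
    // A 256-bit key space is 2^256 / 2^128 = 2^128 times larger
    // than a 128-bit one.
    BigInteger two = BigInteger.valueOf(2);
    BigInteger factor = two.pow(256).divide(two.pow(128));
    System.out.println(factor.equals(two.pow(128))); // true
  }
}
```

One caveat worth noting: the 256 bits here count key-material *bytes*; a secret drawn only from the 52 alphabetic characters carries roughly 32·log2(52) ≈ 182 bits of entropy, somewhat less than the full 2^256 space.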
[GitHub] [hadoop] steveloughran commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions URL: https://github.com/apache/hadoop/pull/1359#issuecomment-526350261 thanks. I've just pushed up another comment for you to look at, and I am making sure I run it without s3guard as well as with. Tested: S3 Ireland # undeleted files An `ITestS3AContractRootDir` run failed with an undeleted file "fork-0005/test/testFile". That filename is used in too many tests to identify the problem. The latest patch uses a unique name for each test case, so if the problem recurs, I can start tracking down the issue. I don't think it's related to my changes in deletion, but given how critical delete is for cleanup, I am not ignoring it. At the same time, I think with this patch we are actually being more rigorous in cleanup. We use S3Guard to identify and delete all files, even when S3 is being inconsistent. And when the client is using s3guard as an authoritative store, we still bypass it for a final bit of due diligence. ``` [ERROR] testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 38.181 s <<< FAILURE!
java.lang.AssertionError: Expected no results from listFiles(/, true), but got 1 elements: S3ALocatedFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0005/test/testFile; isDirectory=false; length=1; replication=1; blocksize=33554432; modification_time=1567098043000; access_time=0; owner=stevel; group=stevel; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false}[eTag='55a54008ad1ba589aa210d2629c1df41', versionId=''] at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.assertNoElements(AbstractContractRootDirectoryTest.java:218) at org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testListEmptyRootDirectory(AbstractContractRootDirectoryTest.java:200) at org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:82) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:748) ``` BTW, note in the stack how S3ALocatedFileStatus now prints etag and version? There's now lossless conversion between that and S3AFileStatus, which I needed so the isDirectory flag doesn't get lost, as that flag is used in the delete operation to choose whether to add a / to a path. This is why there's a new package-private ctor for S3AFileStatus. With the changes in, the file at fault appears to be `testFailedMetadataUpdate` from `ITestS3AMetadataPersistenceException`; doing more cleanup there. That test deliberately creates failures in the metastore update process; maybe if the normal test FS doesn't know about the file, then its test cleanup doesn't find it. Now that I know empty dir markers will stop a scan for and delete of objects, I am starting to wonder if that is the cause of some intermittent test failures we've had in the past, though those could just have come from S3 list inconsistency not finding files to delete. ## Speed I added a section in the DeleteOperation about opportunities to speed up that process through better parallelisation. I also made it clear that such changes should only be derived from data collected in benchmarks running in EC2 itself. If you test remotely, latency can dominate, and you are also less prone to encountering throttling on AWS services, because you are not generating enough load. I don't immediately plan to do such performance tuning as I can see opportunities in
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
hadoop-yetus removed a comment on issue #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions URL: https://github.com/apache/hadoop/pull/1359#issuecomment-525990495 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 14 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 68 | Maven dependency ordering for branch | | +1 | mvninstall | 1078 | trunk passed | | +1 | compile | 1024 | trunk passed | | +1 | checkstyle | 138 | trunk passed | | +1 | mvnsite | 128 | trunk passed | | +1 | shadedclient | 990 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 112 | trunk passed | | 0 | spotbugs | 73 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 209 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 30 | Maven dependency ordering for patch | | +1 | mvninstall | 93 | the patch passed | | +1 | compile | 979 | the patch passed | | +1 | javac | 979 | the patch passed | | -0 | checkstyle | 151 | root: The patch generated 2 new + 64 unchanged - 3 fixed = 66 total (was 67) | | +1 | mvnsite | 128 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 720 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 109 | the patch passed | | +1 | findbugs | 203 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 553 | hadoop-common in the patch passed. | | +1 | unit | 95 | hadoop-aws in the patch passed. | | +1 | asflicense | 53 | The patch does not generate ASF License warnings. 
| | | | 6952 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1359 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a742a228f7c8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 872cdf4 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/2/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/2/testReport/ | | Max. process+thread count | 1395 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1359/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16538) S3AFilesystem trash handling should respect the current UGI
[ https://issues.apache.org/jira/browse/HADOOP-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918927#comment-16918927 ] Steve Loughran commented on HADOOP-16538: - Maybe we should do this for every FS? That is, change the default, which is pretty crufty > S3AFilesystem trash handling should respect the current UGI > --- > > Key: HADOOP-16538 > URL: https://issues.apache.org/jira/browse/HADOOP-16538 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Siddharth Seth >Priority: Major > > S3 move to trash currently relies upon System.getProperty(user.name). > Instead, it should be relying on the current UGI to figure out the username. > getHomeDirectory needs to be overridden to use UGI instead of > System.getProperty -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
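The System.getProperty-vs-UGI distinction the issue describes can be sketched in plain Java. The Supplier below stands in for UserGroupInformation.getCurrentUser().getShortUserName() so the sketch runs without Hadoop on the classpath; the class, method names, bucket, and usernames are made up for the example.

```java
import java.util.function.Supplier;

public class HomeDirSketch {
  // The bug pattern: home dir derived from the JVM-wide user.name property,
  // which ignores the Hadoop UGI (e.g. inside UserGroupInformation.doAs()).
  static String homeDirFromSystemProperty(String bucket) {
    return "s3a://" + bucket + "/user/" + System.getProperty("user.name");
  }

  // The proposed fix pattern: derive the username from the current caller's
  // identity. In Hadoop this would be
  // UserGroupInformation.getCurrentUser().getShortUserName(); here a
  // Supplier stands in for that lookup so the sketch is self-contained.
  static String homeDirFromCaller(String bucket, Supplier<String> currentUser) {
    return "s3a://" + bucket + "/user/" + currentUser.get();
  }

  public static void main(String[] args) {
    // Simulate a proxy-user scenario: the JVM was started as "stevel" but the
    // current request is executing as "alice".
    System.setProperty("user.name", "stevel");
    Supplier<String> ugi = () -> "alice"; // stand-in for the current UGI

    System.out.println(homeDirFromSystemProperty("bucket")); // s3a://bucket/user/stevel
    System.out.println(homeDirFromCaller("bucket", ugi));    // s3a://bucket/user/alice
  }
}
```

Under a doAs(), the two paths diverge, which is exactly why trash (and getHomeDirectory) should use the second form.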
[jira] [Updated] (HADOOP-16538) S3AFilesystem trash handling should respect the current UGI
[ https://issues.apache.org/jira/browse/HADOOP-16538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16538: Parent: HADOOP-15620 Issue Type: Sub-task (was: Improvement) > S3AFilesystem trash handling should respect the current UGI > --- > > Key: HADOOP-16538 > URL: https://issues.apache.org/jira/browse/HADOOP-16538 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Siddharth Seth >Priority: Major > > S3 move to trash currently relies upon System.getProperty(user.name). > Instead, it should be relying on the current UGI to figure out the username. > getHomeDirectory needs to be overridden to use UGI instead of > System.getProperty
[GitHub] [hadoop] hadoop-yetus commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach…
hadoop-yetus commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach… URL: https://github.com/apache/hadoop/pull/1363#issuecomment-526342725 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 146 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 76 | Maven dependency ordering for branch | | +1 | mvninstall | 631 | trunk passed | | +1 | compile | 404 | trunk passed | | +1 | checkstyle | 77 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 927 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 164 | trunk passed | | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 637 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 37 | Maven dependency ordering for patch | | +1 | mvninstall | 588 | the patch passed | | +1 | compile | 396 | the patch passed | | +1 | javac | 396 | the patch passed | | +1 | checkstyle | 77 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 729 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 164 | the patch passed | | +1 | findbugs | 738 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 388 | hadoop-hdds in the patch passed. | | -1 | unit | 2234 | hadoop-ozone in the patch failed. | | +1 | asflicense | 104 | The patch does not generate ASF License warnings. 
| | | | 8695 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog | | | hadoop.ozone.TestOzoneConfigurationFields | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1363 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7f6f6a097d5b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f600fbb | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/4/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/4/testReport/ | | Max. process+thread count | 4913 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach…
hadoop-yetus commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach… URL: https://github.com/apache/hadoop/pull/1363#issuecomment-526342286 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 83 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 64 | Maven dependency ordering for branch | | +1 | mvninstall | 591 | trunk passed | | +1 | compile | 393 | trunk passed | | +1 | checkstyle | 83 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 902 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 184 | trunk passed | | 0 | spotbugs | 475 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 706 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 38 | Maven dependency ordering for patch | | +1 | mvninstall | 580 | the patch passed | | +1 | compile | 424 | the patch passed | | +1 | javac | 424 | the patch passed | | +1 | checkstyle | 85 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 699 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 182 | the patch passed | | +1 | findbugs | 669 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 253 | hadoop-hdds in the patch failed. | | -1 | unit | 2441 | hadoop-ozone in the patch failed. | | +1 | asflicense | 52 | The patch does not generate ASF License warnings. 
| | | | 8620 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider | | | hadoop.ozone.TestOzoneConfigurationFields | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1363 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6538f14d34bb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / f600fbb | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/3/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/3/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/3/testReport/ | | Max. process+thread count | 5058 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service hadoop-ozone/integration-test U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1363/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] The-Alchemist opened a new pull request #1380: fixed typo: CONTINOUS
The-Alchemist opened a new pull request #1380: fixed typo: CONTINOUS URL: https://github.com/apache/hadoop/pull/1380
[GitHub] [hadoop] xiaoyuyao commented on issue #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env…
xiaoyuyao commented on issue #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env… URL: https://github.com/apache/hadoop/pull/1379#issuecomment-526320758 This should fix the SCM connection issue for both the datanode and the OM when SCM starts later than the OM/DN.
[GitHub] [hadoop] xiaoyuyao opened a new pull request #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env…
xiaoyuyao opened a new pull request #1379: HDDS-2047. Datanodes fail to come up after 10 retries in a secure env… URL: https://github.com/apache/hadoop/pull/1379 …ironment.
[GitHub] [hadoop] hadoop-yetus commented on issue #1378: HDDS-1413. TestCloseContainerCommandHandler is flaky
hadoop-yetus commented on issue #1378: HDDS-1413. TestCloseContainerCommandHandler is flaky URL: https://github.com/apache/hadoop/pull/1378#issuecomment-526310499 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 74 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 576 | trunk passed | | +1 | compile | 380 | trunk passed | | +1 | checkstyle | 82 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 1106 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 198 | trunk passed | | 0 | spotbugs | 425 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 644 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 536 | the patch passed | | +1 | compile | 366 | the patch passed | | +1 | javac | 366 | the patch passed | | +1 | checkstyle | 74 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 744 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 161 | the patch passed | | +1 | findbugs | 633 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 305 | hadoop-hdds in the patch passed. | | -1 | unit | 2084 | hadoop-ozone in the patch failed. | | +1 | asflicense | 44 | The patch does not generate ASF License warnings. 
| | | | 8159 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1378/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1378 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2f0fb23dc22b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8e779a1 | | Default Java | 1.8.0_212 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1378/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1378/1/testReport/ | | Max. process+thread count | 4994 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1378/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] adoroszlai commented on issue #1378: HDDS-1413. TestCloseContainerCommandHandler is flaky
adoroszlai commented on issue #1378: HDDS-1413. TestCloseContainerCommandHandler is flaky URL: https://github.com/apache/hadoop/pull/1378#issuecomment-526302920 @nandakumar131 please review
[GitHub] [hadoop] avijayanhwx commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach…
avijayanhwx commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach… URL: https://github.com/apache/hadoop/pull/1363#issuecomment-526290906 > Thanks @avijayanhwx for updating. Can we also add some tests for the added metric in TestCSMMetrics ? > Sorry for not mentioning it in the earlier review Added a unit test for the metrics.
[jira] [Commented] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions
[ https://issues.apache.org/jira/browse/HADOOP-16430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918773#comment-16918773 ] Aaron Fabbri commented on HADOOP-16430: --- +1, LGTM after your clarifying comments. Thanks for the contribution. > S3AFilesystem.delete to incrementally update s3guard with deletions > --- > > Key: HADOOP-16430 > URL: https://issues.apache.org/jira/browse/HADOOP-16430 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0, 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: Screenshot 2019-07-16 at 22.08.31.png > > > Currently S3AFilesystem.delete() only updates S3Guard at the end of a > paged delete operation. This makes it slow when there are many thousands of > files to delete, and increases the window of vulnerability to failures. > Preferred: > * after every bulk DELETE call is issued to S3, queue the (async) delete of > all entries in that post. > * at the end of the delete, await the completion of these operations. > * inside S3AFS, also do the delete across threads, so that different HTTPS > connections can be used. > This should maximise DDB throughput against tables which aren't IO limited. > When executed against small IOP limited tables, the parallel DDB DELETE > batches will trigger a lot of throttling events; we should make sure these > aren't going to trigger failures
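The "Preferred" steps in the description can be sketched with a plain executor. Everything below (the class, the metastore stand-in, the fake keys) is illustrative, not the real S3AFileSystem/S3Guard code: each bulk-delete page queues an async metastore update, and the final step awaits all of them.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IncrementalDeleteSketch {
  // Stand-in for the metastore (in S3A this would be a DynamoDB batch delete).
  static final List<String> metastoreDeletes = new CopyOnWriteArrayList<>();

  static void deleteFromMetastore(List<String> keys) {
    metastoreDeletes.addAll(keys);
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<?>> pending = new ArrayList<>();

    // Simulate three pages of a paged bulk delete.
    List<List<String>> pages = List.of(
        List.of("a/1", "a/2"), List.of("a/3"), List.of("a/4", "a/5"));
    for (List<String> page : pages) {
      // 1. issue the bulk DELETE to S3 for this page (elided)
      // 2. queue the (async) metastore update for the entries in that page
      pending.add(pool.submit(() -> deleteFromMetastore(page)));
    }
    // 3. at the end of the delete, await completion of all queued updates
    for (Future<?> f : pending) {
      f.get();
    }
    pool.shutdown();
    System.out.println(metastoreDeletes.size()); // 5
  }
}
```

The point of the pattern is that metastore updates overlap with the S3 calls instead of piling up at the end, shrinking the failure window the issue describes.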
[GitHub] [hadoop] ajfabbri commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
ajfabbri commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions URL: https://github.com/apache/hadoop/pull/1359#discussion_r319169989 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java ## @@ -0,0 +1,191 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a.impl; + +import java.io.IOException; +import java.io.InterruptedIOException; +import java.util.List; + +import com.amazonaws.AmazonClientException; +import com.amazonaws.services.s3.model.DeleteObjectsRequest; +import com.amazonaws.services.s3.model.DeleteObjectsResult; +import com.amazonaws.services.s3.model.MultiObjectDeleteException; +import com.amazonaws.services.s3.transfer.model.CopyResult; + +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.InvalidRequestException; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.s3a.Retries; +import org.apache.hadoop.fs.s3a.S3AFileStatus; +import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus; +import org.apache.hadoop.fs.s3a.S3AReadOpContext; +import org.apache.hadoop.fs.s3a.S3ObjectAttributes; +import org.apache.hadoop.fs.s3a.s3guard.BulkOperationState; + +/** + * These are all the callbacks which the {@link RenameOperation} + * and {@link DeleteOperation } operations need, + * derived from the appropriate S3AFileSystem methods. + */ +public interface OperationCallbacks { + + /** + * Create the attributes of an object for subsequent use. + * @param path path of the request. + * @param eTag the eTag of the S3 object + * @param versionId S3 object version ID + * @param len length of the file + * @return attributes to use when building the query. + */ + S3ObjectAttributes createObjectAttributes( + Path path, + String eTag, + String versionId, + long len); + + /** + * Create the attributes of an object for subsequent use. + * @param fileStatus file status to build from. + * @return attributes to use when building the query. + */ + S3ObjectAttributes createObjectAttributes( + S3AFileStatus fileStatus); + + /** + * Create the read context for reading from the referenced file, + * using FS state as well as the status. + * @param fileStatus file status. + * @return a context for read and select operations. 
+ */ + S3AReadOpContext createReadContext( + FileStatus fileStatus); + + /** + * The rename has finished; perform any store cleanup operations + * such as creating/deleting directory markers. + * @param sourceRenamed renamed source + * @param destCreated destination file created. + * @throws IOException failure + */ + void finishRename(Path sourceRenamed, Path destCreated) throws IOException; + + /** + * Delete an object, also updating the metastore. + * This call does not create any mock parent entries. + * Retry policy: retry untranslated; delete considered idempotent. + * @param path path to delete + * @param key key of entry + * @param isFile is the path a file (used for instrumentation only) + * @throws AmazonClientException problems working with S3 + * @throws IOException IO failure in the metastore + */ + @Retries.RetryTranslated + void deleteObjectAtPath(Path path, String key, boolean isFile) + throws IOException; + + /** + * Recursive list of files and empty directories. + * + * @param path path to list from + * @param status optional status of path to list. + * @param collectTombstones should tombstones be collected from S3Guard? + * @param includeSelf should the listing include this path if present? + * @return an iterator. + * @throws IOException failure + */ + @Retries.RetryTranslated + RemoteIterator<S3ALocatedFileStatus> listFilesAndEmptyDirectories( + Path path, + S3AFileStatus status, + boolean collectTombstones, + boolean includeSelf) throws IOException; + + /** + * Copy a single object in the bucket via a COPY operation. + * There's no update of metadata, directory markers, etc. + * Callers must implement. + * @param srcKey source object
[GitHub] [hadoop] kai33 commented on a change in pull request #1368: HADOOP-16536. Backport HADOOP-15273 to branch-2
kai33 commented on a change in pull request #1368: HADOOP-16536. Backport HADOOP-15273 to branch-2 URL: https://github.com/apache/hadoop/pull/1368#discussion_r319124379 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java ## @@ -207,15 +207,30 @@ private void compareCheckSums(FileSystem sourceFS, Path source, throws IOException { if (!DistCpUtils.checksumsAreEqual(sourceFS, source, sourceChecksum, targetFS, target)) { - StringBuilder errorMessage = new StringBuilder("Check-sum mismatch between ") - .append(source).append(" and ").append(target).append("."); - if (sourceFS.getFileStatus(source).getBlockSize() != + StringBuilder errorMessage = + new StringBuilder("Checksum mismatch between ") + .append(source).append(" and ").append(target).append("."); + boolean addSkipHint = false; + String srcScheme = sourceFS.getScheme(); + String targetScheme = targetFS.getScheme(); + if (!srcScheme.equals(targetScheme) + && !(srcScheme.contains("hdfs") && targetScheme.contains("hdfs"))) { Review comment: This check will be removed once HADOOP-16158 is backported
[GitHub] [hadoop] kai33 commented on issue #1368: HADOOP-16536. Backport HADOOP-15273 to branch-2
kai33 commented on issue #1368: HADOOP-16536. Backport HADOOP-15273 to branch-2
URL: https://github.com/apache/hadoop/pull/1368#issuecomment-526225714

Update this PR to backport HADOOP-15273 first. Once it's merged, I'll submit another one for HADOOP-16158.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r319117792

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java
##

@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.util.List;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.model.DeleteObjectsRequest;
+import com.amazonaws.services.s3.model.DeleteObjectsResult;
+import com.amazonaws.services.s3.model.MultiObjectDeleteException;
+import com.amazonaws.services.s3.transfer.model.CopyResult;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.InvalidRequestException;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+import org.apache.hadoop.fs.s3a.S3AReadOpContext;
+import org.apache.hadoop.fs.s3a.S3ObjectAttributes;
+import org.apache.hadoop.fs.s3a.s3guard.BulkOperationState;
+
+/**
+ * These are all the callbacks which the {@link RenameOperation}
+ * and {@link DeleteOperation} operations need,
+ * derived from the appropriate S3AFileSystem methods.
+ */
+public interface OperationCallbacks {
+
+  /**
+   * Create the attributes of an object for subsequent use.
+   * @param path path of the request.
+   * @param eTag the eTag of the S3 object
+   * @param versionId S3 object version ID
+   * @param len length of the file
+   * @return attributes to use when building the query.
+   */
+  S3ObjectAttributes createObjectAttributes(
+      Path path,
+      String eTag,
+      String versionId,
+      long len);
+
+  /**
+   * Create the attributes of an object for subsequent use.
+   * @param fileStatus file status to build from.
+   * @return attributes to use when building the query.
+   */
+  S3ObjectAttributes createObjectAttributes(
+      S3AFileStatus fileStatus);
+
+  /**
+   * Create the read context for reading from the referenced file,
+   * using FS state as well as the status.
+   * @param fileStatus file status.
+   * @return a context for read and select operations.
+   */
+  S3AReadOpContext createReadContext(
+      FileStatus fileStatus);
+
+  /**
+   * The rename has finished; perform any store cleanup operations
+   * such as creating/deleting directory markers.
+   * @param sourceRenamed renamed source
+   * @param destCreated destination file created.
+   * @throws IOException failure
+   */
+  void finishRename(Path sourceRenamed, Path destCreated) throws IOException;
+
+  /**
+   * Delete an object, also updating the metastore.
+   * This call does not create any mock parent entries.
+   * Retry policy: retry untranslated; delete considered idempotent.
+   * @param path path to delete
+   * @param key key of entry
+   * @param isFile is the path a file (used for instrumentation only)
+   * @throws AmazonClientException problems working with S3
+   * @throws IOException IO failure in the metastore
+   */
+  @Retries.RetryTranslated
+  void deleteObjectAtPath(Path path, String key, boolean isFile)
+      throws IOException;
+
+  /**
+   * Recursive list of files and empty directories.
+   *
+   * @param path path to list from
+   * @param status optional status of path to list.
+   * @param collectTombstones should tombstones be collected from S3Guard?
+   * @param includeSelf should the listing include this path if present?
+   * @return an iterator.
+   * @throws IOException failure
+   */
+  @Retries.RetryTranslated
+  RemoteIterator<S3ALocatedFileStatus> listFilesAndEmptyDirectories(
+      Path path,
+      S3AFileStatus status,
+      boolean collectTombstones,
+      boolean includeSelf) throws IOException;
+
+  /**
+   * Copy a single object in the bucket via a COPY operation.
+   * There's no update of metadata, directory markers, etc.
+   * Callers must implement.
+   * @param srcKey source
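The point of OperationCallbacks is decoupling: RenameOperation and DeleteOperation program against a narrow interface rather than the whole S3AFileSystem, which supplies the implementation. A minimal, self-contained sketch of that pattern, with toy types standing in for the S3A ones (all names here are illustrative, not the real Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the OperationCallbacks pattern: the operation class
// sees only a narrow callback interface, not the whole filesystem.
interface StoreCallbacks {
  void deleteObjectAtPath(String key);   // stand-in for the S3A callback
}

class DeleteTreeOperation {
  private final StoreCallbacks callbacks;

  DeleteTreeOperation(StoreCallbacks callbacks) {
    this.callbacks = callbacks;
  }

  /** Delete every key under a prefix via the callbacks. */
  int execute(List<String> keysUnderPrefix) {
    for (String key : keysUnderPrefix) {
      callbacks.deleteObjectAtPath(key);
    }
    return keysUnderPrefix.size();
  }
}

public final class CallbackDemo {
  public static List<String> run() {
    List<String> deleted = new ArrayList<>();
    // The "filesystem" supplies the callback implementation; here a
    // method reference that just records which keys were deleted.
    DeleteTreeOperation op = new DeleteTreeOperation(deleted::add);
    op.execute(List.of("dir/a", "dir/b"));
    return deleted;
  }
}
```

This keeps the operation classes testable in isolation: a unit test can hand in a recording implementation, as `run()` does, instead of standing up a real S3 client.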
[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r319117008

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java
##

@@ -0,0 +1,452 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import javax.annotation.Nullable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import com.amazonaws.services.s3.model.DeleteObjectsRequest;
+import com.amazonaws.services.s3.model.DeleteObjectsResult;
+import com.google.common.base.Preconditions;
+import com.google.common.util.concurrent.ListeningExecutorService;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+import org.apache.hadoop.fs.s3a.Tristate;
+import org.apache.hadoop.fs.s3a.s3guard.BulkOperationState;
+import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
+import org.apache.hadoop.fs.s3a.s3guard.S3Guard;
+import org.apache.hadoop.util.DurationInfo;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.submit;
+import static org.apache.hadoop.fs.s3a.impl.CallableSupplier.waitForCompletion;
+
+/**
+ * Implementation of the delete operation.
+ * For an authoritative S3Guarded store, after the list and delete of the
+ * combined store, we repeat against raw S3.
+ * This will correct for any situation where the authoritative listing is
+ * incomplete.
+ */
+public class DeleteOperation extends AbstractStoreOperation {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      DeleteOperation.class);
+
+  /**
+   * This is a switch to turn on when trying to debug
+   * deletion problems; it requests the results of
+   * the delete call from AWS then audits them.
+   */
+  private static final boolean AUDIT_DELETED_KEYS = true;
+
+  /**
+   * Used to stop any re-entrancy of the rename.
+   * This is an execute-once operation.
+   */
+  private final AtomicBoolean executed = new AtomicBoolean(false);
+
+  private final S3AFileStatus status;
+
+  private final boolean recursive;
+
+  private final OperationCallbacks callbacks;
+
+  private final int pageSize;
+
+  private final MetadataStore metadataStore;
+
+  private final ListeningExecutorService executor;
+
+  private List<DeleteObjectsRequest.KeyVersion> keys;
+
+  private List<Path> paths;
+
+  private CompletableFuture<Void> deleteFuture;
+
+  private long filesDeleted;
+  private long extraFilesDeleted;
+
+  /**
+   * Constructor.
+   * @param context store context
+   * @param status pre-fetched source status
+   * @param recursive recursive delete?
+   * @param callbacks callback provider
+   * @param pageSize number of entries in a page
+   */
+  public DeleteOperation(final StoreContext context,
+      final S3AFileStatus status,
+      final boolean recursive,
+      final OperationCallbacks callbacks, int pageSize) {
+
+    super(context);
+    this.status = status;
+    this.recursive = recursive;
+    this.callbacks = callbacks;
+    checkArgument(pageSize > 0
+        && pageSize <= InternalConstants.MAX_ENTRIES_TO_DELETE,
+        "page size out of range: %d", pageSize);
+    this.pageSize = pageSize;
+    metadataStore = context.getMetadataStore();
+    executor = context.createThrottledExecutor(2);
+  }
+
+  public long getFilesDeleted() {
+    return filesDeleted;
+  }
+
+  public long getExtraFilesDeleted() {
+    return extraFilesDeleted;
+  }
+
+  /**
+   * Delete a file or directory tree.
+   * This call does not create any fake parent directory; that is
+   * left to the caller.
+   * The actual delete call is done in a separate thread.
+   * Only one delete at a time is submitted, however, to
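The quoted class validates the page size up front and keeps a single `deleteFuture`, so at most one bulk delete is in flight while the listing continues. A simplified, self-contained sketch of that "accumulate a page, wait for the previous delete, submit the next" pattern, with plain strings instead of S3 keys (class and method names are illustrative, not the real DeleteOperation API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Simplified version of DeleteOperation's paging: keys accumulate up to
// pageSize; before submitting a new page the previous async delete is
// awaited, so only one delete request is ever in flight.
public final class PagedDeleter {
  private final int pageSize;
  private final List<String> page = new ArrayList<>();
  private CompletableFuture<Void> deleteFuture;   // the single in-flight delete
  private int deleted;

  public PagedDeleter(int pageSize) {
    // Mirrors the checkArgument() guard in the constructor above.
    if (pageSize <= 0) {
      throw new IllegalArgumentException("page size out of range: " + pageSize);
    }
    this.pageSize = pageSize;
  }

  /** Add a key from the listing; submit a delete when a page fills. */
  public void add(String key) {
    page.add(key);
    if (page.size() == pageSize) {
      flush();
    }
  }

  /** Wait for any previous delete, then submit the current page. */
  private void flush() {
    if (deleteFuture != null) {
      deleteFuture.join();                 // at most one delete in flight
    }
    List<String> batch = new ArrayList<>(page);
    page.clear();
    // Stand-in for the bulk DeleteObjects call.
    deleteFuture = CompletableFuture.runAsync(() -> deleted += batch.size());
  }

  /** Submit the final partial page and wait for completion. */
  public int finish() {
    if (!page.isEmpty()) {
      flush();
    }
    if (deleteFuture != null) {
      deleteFuture.join();
    }
    return deleted;
  }
}
```

The benefit of the single in-flight future is backpressure: listing can run ahead by one page, but delete requests cannot pile up and overwhelm the store or exceed request-rate limits.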
[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r319115905

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##

@@ -3770,19 +3731,54 @@ public LocatedFileStatus next() throws IOException {
   @Retries.RetryTranslated
   public RemoteIterator<LocatedFileStatus> listFilesAndEmptyDirectories(
       Path f, boolean recursive) throws IOException {
-    return invoker.retry("list", f.toString(), true,
-        () -> innerListFiles(f, recursive, new Listing.AcceptAllButS3nDirs()));
+    return innerListFiles(f, recursive, Listing.ACCEPT_ALL_BUT_S3N, null, true);
   }

-  @Retries.OnceTranslated
-  private RemoteIterator<LocatedFileStatus> innerListFiles(Path f, boolean
-      recursive, Listing.FileStatusAcceptor acceptor) throws IOException {
+  /**
+   * List files under the path.
+   *
+   * If the path is authoritative, only S3Guard will be queried.

Review comment:
   clarified this is client-side
[jira] [Commented] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU
[ https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16918669#comment-16918669 ]

Thomas Demoor commented on HADOOP-16193:

[~ehiggs] is correct. On AWS, the multipart put, the overwriting regular put and the delete might hit different servers, causing temporary inconsistencies. This is documented here: https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel. Evidently, strong consistency solves this problem. What's the purpose of this test? Are we concerned about racing uploads on the same name?

> add extra S3A MPU test to see what happens if a file is created during the MPU
> --
>
> Key: HADOOP-16193
> URL: https://issues.apache.org/jira/browse/HADOOP-16193
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
> Fix For: 3.1.3
>
> Proposed extra test for the S3A MPU: if you create and then delete a file
> while an MPU is in progress, when you finally complete the MPU the new data
> is present.
> This verifies that the other FS operations don't somehow cancel the
> in-progress upload, and that eventual consistency brings the latest value out.

--
This message was sent by Atlassian Jira
(v8.3.2#803003)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket
hadoop-yetus commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket
URL: https://github.com/apache/hadoop/pull/1277#issuecomment-526197265

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 34 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 12 | Maven dependency ordering for branch |
| +1 | mvninstall | 597 | trunk passed |
| +1 | compile | 383 | trunk passed |
| +1 | checkstyle | 68 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 917 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 167 | trunk passed |
| 0 | spotbugs | 470 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 701 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 22 | Maven dependency ordering for patch |
| +1 | mvninstall | 593 | the patch passed |
| +1 | compile | 394 | the patch passed |
| +1 | cc | 394 | the patch passed |
| +1 | javac | 394 | the patch passed |
| +1 | checkstyle | 81 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 756 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 171 | the patch passed |
| +1 | findbugs | 639 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 303 | hadoop-hdds in the patch passed. |
| -1 | unit | 1591 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | | 7646 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1277 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 376784018ab2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 8c0759d |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/testReport/ |
| Max. process+thread count | 5019 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/9/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#issuecomment-526193536

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 176 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 807 | trunk passed |
| +1 | compile | 492 | trunk passed |
| +1 | checkstyle | 99 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1143 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 224 | trunk passed |
| 0 | spotbugs | 506 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 772 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 678 | the patch passed |
| +1 | compile | 455 | the patch passed |
| +1 | javac | 455 | the patch passed |
| +1 | checkstyle | 96 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 4 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 893 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 211 | the patch passed |
| -1 | findbugs | 499 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) |
||| _ Other Tests _ |
| +1 | unit | 408 | hadoop-hdds in the patch passed. |
| -1 | unit | 3186 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | | 10640 | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-ozone |
| | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path; dereferenced at BaseFreonGenerator.java:[line 263] |
| | Unused field:OzoneClientKeyValidator.java |
| | Unused field:OzoneClientKeyValidator.java |
| | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] |
| | Unused field:SameKeyReader.java |
| | Unused field:SameKeyReader.java |
| Failed junit tests | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower |
| | hadoop.ozone.om.TestSecureOzoneManager |
| | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
| | hadoop.ozone.client.rpc.TestBlockOutputStream |
| | hadoop.ozone.TestOzoneConfigurationFields |
| | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog |
| | hadoop.ozone.om.TestOMRatisSnapshots |
| | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
| | hadoop.ozone.TestMiniChaosOzoneCluster |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1341 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
| uname | Linux 13789faf097b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 8c0759d |
| Default Java | 1.8.0_222 |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/5/artifact/out/new-findbugs-hadoop-ozone.html |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/5/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/5/testReport/ |
| Max. process+thread count | 3710 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/5/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
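The first FindBugs warning above, "possible null pointer dereference ... on exception path", refers to a common pattern: a reference assigned inside a try block is dereferenced in the exception handler, where the assignment may never have happened. A hedged illustration of the pattern and its fix (this is not the actual Freon code, just the bug class FindBugs is flagging):

```java
// Illustration of the "null pointer dereference on exception path" bug
// pattern FindBugs reports, and one way to guard against it.
public final class ExceptionPathDemo {

  static String buggy(boolean fail) {
    String volume = null;
    try {
      if (fail) {
        throw new IllegalStateException("lookup failed");
      }
      volume = "vol1";
      return volume;
    } catch (IllegalStateException e) {
      // BUG: if the exception fired before the assignment,
      // volume is still null here.
      return volume.toUpperCase();   // NullPointerException on this path
    }
  }

  static String fixed(boolean fail) {
    String volume = null;
    try {
      if (fail) {
        throw new IllegalStateException("lookup failed");
      }
      volume = "vol1";
      return volume;
    } catch (IllegalStateException e) {
      // Guard the exception path before dereferencing.
      return volume == null ? "unknown" : volume.toUpperCase();
    }
  }
}
```

Static analyzers trace both the normal and exceptional control flow, which is why the warning names the "exception path" explicitly: the normal path is fine, only the handler can see the null.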
[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r319074080

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/DeleteOperation.java
##
[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on a change in pull request #1359: HADOOP-16430.S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r319072002

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java
##
[GitHub] [hadoop] steveloughran commented on a change in pull request #1359: HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions
steveloughran commented on a change in pull request #1359: HADOOP-16430. S3AFilesystem.delete to incrementally update s3guard with deletions
URL: https://github.com/apache/hadoop/pull/1359#discussion_r319071023

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/OperationCallbacks.java ##

@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.util.List;
+
+import com.amazonaws.AmazonClientException;
+import com.amazonaws.services.s3.model.DeleteObjectsRequest;
+import com.amazonaws.services.s3.model.DeleteObjectsResult;
+import com.amazonaws.services.s3.model.MultiObjectDeleteException;
+import com.amazonaws.services.s3.transfer.model.CopyResult;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.InvalidRequestException;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.apache.hadoop.fs.s3a.S3AFileStatus;
+import org.apache.hadoop.fs.s3a.S3ALocatedFileStatus;
+import org.apache.hadoop.fs.s3a.S3AReadOpContext;
+import org.apache.hadoop.fs.s3a.S3ObjectAttributes;
+import org.apache.hadoop.fs.s3a.s3guard.BulkOperationState;
+
+/**
+ * These are all the callbacks which the {@link RenameOperation}
+ * and {@link DeleteOperation} operations need,
+ * derived from the appropriate S3AFileSystem methods.
+ */
+public interface OperationCallbacks {
+
+  /**
+   * Create the attributes of an object for subsequent use.
+   * @param path path of the request.
+   * @param eTag the eTag of the S3 object
+   * @param versionId S3 object version ID
+   * @param len length of the file
+   * @return attributes to use when building the query.
+   */
+  S3ObjectAttributes createObjectAttributes(
+      Path path,
+      String eTag,
+      String versionId,
+      long len);
+
+  /**
+   * Create the attributes of an object for subsequent use.
+   * @param fileStatus file status to build from.
+   * @return attributes to use when building the query.
+   */
+  S3ObjectAttributes createObjectAttributes(
+      S3AFileStatus fileStatus);
+
+  /**
+   * Create the read context for reading from the referenced file,
+   * using FS state as well as the status.
+   * @param fileStatus file status.
+   * @return a context for read and select operations.
+   */
+  S3AReadOpContext createReadContext(
+      FileStatus fileStatus);
+
+  /**
+   * The rename has finished; perform any store cleanup operations
+   * such as creating/deleting directory markers.
+   * @param sourceRenamed renamed source
+   * @param destCreated destination file created.
+   * @throws IOException failure
+   */
+  void finishRename(Path sourceRenamed, Path destCreated) throws IOException;
+
+  /**
+   * Delete an object, also updating the metastore.
+   * This call does not create any mock parent entries.
+   * Retry policy: retry untranslated; delete considered idempotent.
+   * @param path path to delete
+   * @param key key of entry
+   * @param isFile is the path a file (used for instrumentation only)
+   * @throws AmazonClientException problems working with S3
+   * @throws IOException IO failure in the metastore
+   */
+  @Retries.RetryTranslated
+  void deleteObjectAtPath(Path path, String key, boolean isFile)
+      throws IOException;
+
+  /**
+   * Recursive list of files and empty directories.
+   *
+   * @param path path to list from
+   * @param status optional status of path to list.
+   * @param collectTombstones should tombstones be collected from S3Guard?
+   * @param includeSelf should the listing include this path if present?
+   * @return an iterator.
+   * @throws IOException failure
+   */
+  @Retries.RetryTranslated
+  RemoteIterator<S3ALocatedFileStatus> listFilesAndEmptyDirectories(
+      Path path,
+      S3AFileStatus status,
+      boolean collectTombstones,
+      boolean includeSelf) throws IOException;
+
+  /**
+   * Copy a single object in the bucket via a COPY operation.
+   * There's no update of metadata, directory markers, etc.
+   * Callers must implement.
+   * @param srcKey source
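The diff above is cut off in this archive, but the design it shows is clear: rename/delete operations receive a narrow callback interface instead of the whole S3AFileSystem. As a rough, dependency-free analogue of that pattern (all names here are illustrative, not the actual Hadoop API):

```java
// Hypothetical, simplified analogue of the OperationCallbacks idea:
// an operation is handed only the narrow store surface it needs,
// so it can be driven and tested independently of the filesystem.
import java.util.ArrayList;
import java.util.List;

public class CallbacksSketch {

  /** The narrow surface an operation may use (cf. OperationCallbacks). */
  interface StoreCallbacks {
    /** Delete one object; deletes are treated as idempotent. */
    void deleteObjectAtPath(String path, String key, boolean isFile);
  }

  /** Toy store that records what the operation asked it to do. */
  static class RecordingStore implements StoreCallbacks {
    final List<String> log = new ArrayList<>();

    @Override
    public void deleteObjectAtPath(String path, String key, boolean isFile) {
      log.add("delete " + key + (isFile ? " (file)" : " (dir marker)"));
    }
  }

  /** Toy delete operation driven purely through the callbacks. */
  static void runDelete(StoreCallbacks callbacks, String... keys) {
    for (String key : keys) {
      callbacks.deleteObjectAtPath("/" + key, key, true);
    }
  }

  public static void main(String[] args) {
    RecordingStore store = new RecordingStore();
    runDelete(store, "a", "b");
    if (!store.log.toString().equals("[delete a (file), delete b (file)]")) {
      throw new AssertionError(store.log);
    }
    System.out.println(store.log);
  }
}
```

Because the operation only sees StoreCallbacks, it can be unit-tested against a recording fake like the one above, while the real S3AFileSystem supplies the production implementation.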
[GitHub] [hadoop] hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-526180163 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 88 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 636 | trunk passed | | +1 | compile | 380 | trunk passed | | +1 | checkstyle | 75 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 1040 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 205 | trunk passed | | 0 | spotbugs | 528 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 779 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 718 | the patch passed | | +1 | compile | 448 | the patch passed | | +1 | javac | 448 | the patch passed | | +1 | checkstyle | 99 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 3 | The patch has no ill-formed XML file. | | +1 | shadedclient | 843 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 208 | the patch passed | | -1 | findbugs | 497 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 416 | hadoop-hdds in the patch passed. | | -1 | unit | 1448 | hadoop-ozone in the patch failed. | | +1 | asflicense | 58 | The patch does not generate ASF License warnings. 
| | | | 8423 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 263] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | | Unused field:SameKeyReader.java | | | Unused field:SameKeyReader.java | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux a1a0731336e9 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8c0759d | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/7/artifact/out/new-findbugs-hadoop-ozone.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/7/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/7/testReport/ | | Max. process+thread count | 2376 (vs. 
ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/7/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
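The FindBugs item in the report above ("Possible null pointer dereference of volume ... on exception path") names a common bug shape. A minimal, hypothetical illustration of that pattern and one way to avoid it (this is not the actual BaseFreonGenerator code):

```java
// Hypothetical illustration of the FindBugs "null pointer dereference on
// exception path" warning class -- not the actual Freon source.
public class NullOnExceptionPathSketch {

  static class Volume {
    final String name;
    Volume(String name) { this.name = name; }
  }

  static Volume lookup(String name) throws Exception {
    if (name.isEmpty()) throw new Exception("no such volume");
    return new Volume(name);
  }

  /** Buggy shape: on the exception path, 'volume' is still null. */
  static String ensureVolumeBuggy(String name) {
    Volume volume = null;
    try {
      volume = lookup(name);
    } catch (Exception e) {
      // BUG: lookup() threw, so volume is null here -> NullPointerException
      return "retrying " + volume.name;
    }
    return volume.name;
  }

  /** Fixed shape: handle the failure without touching the null reference. */
  static String ensureVolumeFixed(String name) {
    try {
      return lookup(name).name;
    } catch (Exception e) {
      return "creating " + name;
    }
  }

  public static void main(String[] args) {
    if (!ensureVolumeFixed("vol1").equals("vol1")) throw new AssertionError();
    if (!ensureVolumeFixed("").equals("creating ")) throw new AssertionError();
    try {
      ensureVolumeBuggy("");
      throw new AssertionError("expected NPE");
    } catch (NullPointerException expected) {
      // the exception path dereferenced the null 'volume', as FindBugs warns
    }
    System.out.println("ok");
  }
}
```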
[GitHub] [hadoop] hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-526179834 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 45 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 720 | trunk passed | | +1 | compile | 457 | trunk passed | | +1 | checkstyle | 82 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 1057 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 168 | trunk passed | | 0 | spotbugs | 443 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 646 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 557 | the patch passed | | +1 | compile | 378 | the patch passed | | +1 | javac | 378 | the patch passed | | +1 | checkstyle | 77 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 721 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 179 | the patch passed | | -1 | findbugs | 466 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 319 | hadoop-hdds in the patch passed. | | -1 | unit | 1671 | hadoop-ozone in the patch failed. | | +1 | asflicense | 44 | The patch does not generate ASF License warnings. 
| | | | 8029 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 263] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | | Unused field:SameKeyReader.java | | | Unused field:SameKeyReader.java | | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher | | | hadoop.ozone.TestOzoneConfigurationFields | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 4161aff6ab9d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8c0759d | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/8/artifact/out/new-findbugs-hadoop-ozone.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/8/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/8/testReport/ | | Max. process+thread count | 5327 (vs. 
ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/8/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail:
[GitHub] [hadoop] hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-526174133 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 43 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 585 | trunk passed | | +1 | compile | 364 | trunk passed | | +1 | checkstyle | 73 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 885 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 167 | trunk passed | | 0 | spotbugs | 415 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 620 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 528 | the patch passed | | +1 | compile | 368 | the patch passed | | +1 | javac | 368 | the patch passed | | +1 | checkstyle | 81 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. | | +1 | shadedclient | 679 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 171 | the patch passed | | -1 | findbugs | 447 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 321 | hadoop-hdds in the patch passed. | | -1 | unit | 1746 | hadoop-ozone in the patch failed. | | +1 | asflicense | 48 | The patch does not generate ASF License warnings. 
| | | | 7539 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 263] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | | Unused field:SameKeyReader.java | | | Unused field:SameKeyReader.java | | Failed junit tests | hadoop.ozone.client.rpc.TestCommitWatcher | | | hadoop.ozone.client.rpc.TestBlockOutputStream | | | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.ozone.client.rpc.TestReadRetries | | | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 0e868ea3a3bd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8c0759d | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/6/artifact/out/new-findbugs-hadoop-ozone.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/6/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results 
| https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/6/testReport/ | | Max. process+thread count | 4138 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/6/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact
[GitHub] [hadoop] hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus commented on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-526172825 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 555 | trunk passed | | +1 | compile | 368 | trunk passed | | +1 | checkstyle | 74 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 858 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 170 | trunk passed | | 0 | spotbugs | 412 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 608 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 550 | the patch passed | | +1 | compile | 394 | the patch passed | | +1 | javac | 394 | the patch passed | | +1 | checkstyle | 78 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 686 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 173 | the patch passed | | -1 | findbugs | 443 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 321 | hadoop-hdds in the patch passed. | | -1 | unit | 2211 | hadoop-ozone in the patch failed. | | +1 | asflicense | 53 | The patch does not generate ASF License warnings. 
| | | | 7975 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 263] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | | Unused field:SameKeyReader.java | | | Unused field:SameKeyReader.java | | Failed junit tests | hadoop.ozone.client.rpc.TestContainerStateMachineFailures | | | hadoop.ozone.TestStorageContainerManager | | | hadoop.hdds.scm.pipeline.TestNode2PipelineMap | | | hadoop.hdds.scm.pipeline.TestNodeFailure | | | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestroy | | | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider | | | hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 9fa56b5cb164 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c749f62 | | Default 
Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/4/artifact/out/new-findbugs-hadoop-ozone.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/4/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/4/testReport/ | | Max. process+thread count | 4990 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplified reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplified reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#discussion_r319030010

## File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh ##

@@ -0,0 +1,66 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+REPORT_DIR=${REPORT_DIR:-$PWD}
+
+## generate summary txt file
+find "." -name 'TEST*.xml' -print0 \
+| xargs -n1 -0 "grep" -l -E "
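The quoted script is truncated mid-pattern in this archive, so the actual grep regex is unknown. As a hedged sketch of the same summary step (the failure-matching pattern below is an assumption; surefire-style TEST*.xml reports mark failing cases with <failure>/<error> elements):

```shell
#!/usr/bin/env bash
# Sketch of the "summarize unit test results" step quoted above.
# ASSUMPTION: the elided grep pattern matches <failure>/<error> elements.
set -eu

REPORT_DIR="${TMPDIR:-/tmp}/yetus-summary-sketch"
mkdir -p "$REPORT_DIR"

# Fabricate one passing and one failing surefire-style report.
cat > "$REPORT_DIR/TEST-org.example.PassingTest.xml" <<'EOF'
<testsuite name="org.example.PassingTest" tests="1"/>
EOF
cat > "$REPORT_DIR/TEST-org.example.FailingTest.xml" <<'EOF'
<testsuite name="org.example.FailingTest" tests="1">
  <testcase name="testIt"><failure message="boom"/></testcase>
</testsuite>
EOF

# List only the reports containing a failure or error element;
# grep -l exits non-zero for non-matching files, so tolerate xargs' status.
find "$REPORT_DIR" -name 'TEST*.xml' -print0 \
  | xargs -n1 -0 grep -l -E '<(failure|error)' \
  > "$REPORT_DIR/summary.txt" || true

cat "$REPORT_DIR/summary.txt"
```

Run as-is, summary.txt ends up listing only the FailingTest report; -print0/-0 keeps the pipeline safe for paths containing spaces.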
[GitHub] [hadoop] hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplified reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplified reports by the dev-support/checks/*.sh scripts URL: https://github.com/apache/hadoop/pull/1348#issuecomment-526154040 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 43 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | 0 | shelldocs | 1 | Shelldocs was not available. | | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 592 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 804 | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 554 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | -1 | shellcheck | 1 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 706 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | unit | 114 | hadoop-hdds in the patch passed. | | +1 | unit | 294 | hadoop-ozone in the patch passed. | | +1 | asflicense | 50 | The patch does not generate ASF License warnings. 
| | | | 3386 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/20/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1348 | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 4c482d75bc34 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 8c0759d | | shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/20/artifact/out/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/20/testReport/ | | Max. process+thread count | 447 (vs. ulimit of 5500) | | modules | C: hadoop-ozone U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/20/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1229#issuecomment-524975812 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 627 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 1711 | Maven dependency ordering for branch | | +1 | mvninstall | | trunk passed | | +1 | compile | 1008 | trunk passed | | +1 | checkstyle | 145 | trunk passed | | +1 | mvnsite | 133 | trunk passed | | +1 | shadedclient | 1006 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 114 | trunk passed | | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 192 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 26 | Maven dependency ordering for patch | | +1 | mvninstall | 82 | the patch passed | | +1 | compile | 942 | the patch passed | | +1 | javac | 942 | the patch passed | | +1 | checkstyle | 146 | root: The patch generated 0 new + 97 unchanged - 2 fixed = 97 total (was 99) | | +1 | mvnsite | 130 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 672 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 114 | the patch passed | | +1 | findbugs | 206 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 547 | hadoop-common in the patch passed. | | +1 | unit | 93 | hadoop-aws in the patch passed. | | +1 | asflicense | 49 | The patch does not generate ASF License warnings. 
| | | | 9096 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/16/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1229 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 2e49e7b6df2e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 689d2e6 | | Default Java | 1.8.0_222 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/16/testReport/ | | Max. process+thread count | 1508 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/16/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1229#issuecomment-524081753 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 74 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 22 | Maven dependency ordering for branch | | +1 | mvninstall | 1211 | trunk passed | | +1 | compile | 1083 | trunk passed | | +1 | checkstyle | 159 | trunk passed | | +1 | mvnsite | 127 | trunk passed | | +1 | shadedclient | | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 100 | trunk passed | | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 191 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 23 | Maven dependency ordering for patch | | +1 | mvninstall | 82 | the patch passed | | +1 | compile | 1019 | the patch passed | | +1 | javac | 1019 | the patch passed | | -0 | checkstyle | 155 | root: The patch generated 1 new + 85 unchanged - 2 fixed = 86 total (was 87) | | +1 | mvnsite | 126 | the patch passed | | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 769 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 98 | the patch passed | | +1 | findbugs | 206 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 609 | hadoop-common in the patch failed. | | +1 | unit | 89 | hadoop-aws in the patch passed. 
| | +1 | asflicense | 47 | The patch does not generate ASF License warnings. | | | | 7303 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder | | | hadoop.ha.TestZKFailoverController | | | hadoop.fs.shell.TestCopy | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/14/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1229 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 12487ab0521f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 28fb4b5 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/14/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/14/artifact/out/whitespace-eol.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/14/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/14/testReport/ | | Max. process+thread count | 1345 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/14/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1229#issuecomment-523914399 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 47 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 67 | Maven dependency ordering for branch | | +1 | mvninstall | 1131 | trunk passed | | +1 | compile | 1074 | trunk passed | | +1 | checkstyle | 132 | trunk passed | | +1 | mvnsite | 109 | trunk passed | | +1 | shadedclient | 910 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 83 | trunk passed | | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 162 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 22 | Maven dependency ordering for patch | | +1 | mvninstall | 71 | the patch passed | | +1 | compile | 1060 | the patch passed | | +1 | javac | 1060 | the patch passed | | -0 | checkstyle | 136 | root: The patch generated 1 new + 46 unchanged - 2 fixed = 47 total (was 48) | | +1 | mvnsite | 102 | the patch passed | | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 642 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 84 | the patch passed | | +1 | findbugs | 182 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 504 | hadoop-common in the patch failed. | | +1 | unit | 70 | hadoop-aws in the patch passed. 
| | +1 | asflicense | 43 | The patch does not generate ASF License warnings. | | | | 6639 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.shell.TestCopy | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/13/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1229 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 3eb825637415 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 69ddb36 | | Default Java | 1.8.0_212 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/13/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/13/artifact/out/whitespace-eol.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/13/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/13/testReport/ | | Max. process+thread count | 1408 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/13/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy
hadoop-yetus removed a comment on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy URL: https://github.com/apache/hadoop/pull/1229#issuecomment-524082155 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 36 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 24 | Maven dependency ordering for branch | | +1 | mvninstall | 1251 | trunk passed | | +1 | compile | 1138 | trunk passed | | +1 | checkstyle | 147 | trunk passed | | +1 | mvnsite | 123 | trunk passed | | +1 | shadedclient | 1086 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 99 | trunk passed | | 0 | spotbugs | 69 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 193 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 25 | Maven dependency ordering for patch | | +1 | mvninstall | 82 | the patch passed | | +1 | compile | 1019 | the patch passed | | +1 | javac | 1019 | the patch passed | | -0 | checkstyle | 156 | root: The patch generated 1 new + 85 unchanged - 2 fixed = 86 total (was 87) | | +1 | mvnsite | 121 | the patch passed | | -1 | whitespace | 1 | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 739 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 90 | the patch passed | | +1 | findbugs | 219 | the patch passed | ||| _ Other Tests _ | | -1 | unit | 550 | hadoop-common in the patch failed. | | +1 | unit | 73 | hadoop-aws in the patch passed. 
| | +1 | asflicense | 42 | The patch does not generate ASF License warnings. | | | | 7221 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.viewfs.TestViewFsTrash | | | hadoop.fs.shell.TestCopy | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/15/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1229 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux aa370dfb0df7 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 28fb4b5 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/15/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/15/artifact/out/whitespace-eol.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/15/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/15/testReport/ | | Max. process+thread count | 1464 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/15/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts URL: https://github.com/apache/hadoop/pull/1348#discussion_r319014650 ## File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh ## @@ -0,0 +1,67 @@ +#!/usr/bin/env bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +REPORT_DIR=${REPORT_DIR:-$PWD} + +## generate summary txt file +find "." -name 'TEST*.xml' -print0 \ +| xargs -n1 -0 "grep" -l -E "
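The quoted `_mvn_unit_report.sh` excerpt is cut off mid-command in this archive, right before the `grep` pattern. As a non-authoritative sketch of how such a summary step could work — the `'<(failure|error)'` regex, the `sed` cleanup, and the demo fixture files are assumptions for illustration, not the actual HDDS-2030 patch:

```shell
# Hedged sketch only: the real regex is truncated in the quoted diff.
# Demo fixtures stand in for Maven surefire output so the pipeline is runnable.
demo=$(mktemp -d)
cd "$demo"
mkdir -p surefire
printf '<testsuite><testcase><failure/></testcase></testsuite>' \
  > surefire/TEST-org.example.FooTest.xml
printf '<testsuite><testcase/></testsuite>' \
  > surefire/TEST-org.example.BarTest.xml

REPORT_DIR=${REPORT_DIR:-$PWD}

## generate summary txt file: one line per test class whose JUnit XML
## report contains a <failure> or <error> element
find . -name 'TEST*.xml' -print0 \
  | xargs -n1 -0 grep -l -E '<(failure|error)' \
  | sed -e 's|.*/TEST-||' -e 's|\.xml$||' \
  | sort -u > "$REPORT_DIR/summary.txt"

cat "$REPORT_DIR/summary.txt"
```

Run against the fixtures above, only `org.example.FooTest` ends up in the summary, since only its report contains a `<failure>` element; `-print0`/`-0` keeps the pipeline safe for paths with spaces.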
[GitHub] [hadoop] hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-526141210

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 81 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 648 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 836 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 606 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 0 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 752 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 115 | hadoop-hdds in the patch passed. |
| +1 | unit | 308 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
| | | 3632 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/19/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 4cbf21044398 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c749f62 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/19/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/19/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/19/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services

- To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16534) Exclude submarine from hadoop source build
[ https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nanda kumar updated HADOOP-16534:
---------------------------------
    Status: Patch Available  (was: Open)

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Nanda kumar
> Assignee: Nanda kumar
> Priority: Major
>
> When we build the Hadoop source package, it should not contain the submarine
> project/code.

-- This message was sent by Atlassian Jira (v8.3.2#803003)

- To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts URL: https://github.com/apache/hadoop/pull/1348#issuecomment-526137947 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | 0 | shelldocs | 1 | Shelldocs was not available. | | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 585 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 799 | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 571 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | -1 | shellcheck | 2 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 838 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | unit | 106 | hadoop-hdds in the patch passed. | | +1 | unit | 335 | hadoop-ozone in the patch passed. | | +1 | asflicense | 50 | The patch does not generate ASF License warnings. 
| | | | 3553 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/18/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1348 | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux d0f4a3e63965 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c749f62 | | shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/18/artifact/out/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/18/testReport/ | | Max. process+thread count | 412 (vs. ulimit of 5500) | | modules | C: hadoop-ozone U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/18/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts URL: https://github.com/apache/hadoop/pull/1348#discussion_r319011019 ## File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh ## @@ -0,0 +1,67 @@ +#!/usr/bin/env bash +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +REPORT_DIR=${REPORT_DIR:-$PWD} + +## generate summary txt file +find "." -name 'TEST*.xml' -print0 \ +| xargs -n1 -0 "grep" -l -E "
[GitHub] [hadoop] nandakumar131 opened a new pull request #1376: HDDS-2058. Remove hadoop dependencies in ozone build.
nandakumar131 opened a new pull request #1376: HDDS-2058. Remove hadoop dependencies in ozone build. URL: https://github.com/apache/hadoop/pull/1376 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1375: State check during container state transition in datanode should be lock protected
hadoop-yetus commented on issue #1375: State check during container state transition in datanode should be lock protected URL: https://github.com/apache/hadoop/pull/1375#issuecomment-526136158 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 259 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 667 | trunk passed | | +1 | compile | 428 | trunk passed | | +1 | checkstyle | 80 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 1025 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 186 | trunk passed | | 0 | spotbugs | 466 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 702 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 629 | the patch passed | | +1 | compile | 387 | the patch passed | | +1 | javac | 387 | the patch passed | | +1 | checkstyle | 86 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 661 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 172 | the patch passed | | +1 | findbugs | 656 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 338 | hadoop-hdds in the patch passed. | | -1 | unit | 1885 | hadoop-ozone in the patch failed. | | +1 | asflicense | 50 | The patch does not generate ASF License warnings. 
| | | | 8407 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields | | | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1375 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d65163c71451 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c749f62 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/1/testReport/ | | Max. process+thread count | 4580 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/container-service U: hadoop-hdds/container-service | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1375/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1255: HDDS-1935. Improve the visibility with Ozone Insight tool
hadoop-yetus commented on issue #1255: HDDS-1935. Improve the visibility with Ozone Insight tool URL: https://github.com/apache/hadoop/pull/1255#issuecomment-526135834 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 42 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 68 | Maven dependency ordering for branch | | +1 | mvninstall | 592 | trunk passed | | +1 | compile | 380 | trunk passed | | +1 | checkstyle | 79 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 871 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 180 | trunk passed | | 0 | spotbugs | 449 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 668 | trunk passed | | -0 | patch | 499 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | 0 | mvndep | 40 | Maven dependency ordering for patch | | +1 | mvninstall | 566 | the patch passed | | +1 | compile | 463 | the patch passed | | +1 | javac | 463 | the patch passed | | +1 | checkstyle | 103 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | shellcheck | 38 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 12 | The patch has no ill-formed XML file. | | +1 | shadedclient | 892 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 226 | the patch passed | | +1 | findbugs | 851 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 358 | hadoop-hdds in the patch passed. 
| | -1 | unit | 1971 | hadoop-ozone in the patch failed. | | +1 | asflicense | 55 | The patch does not generate ASF License warnings. | | | | 8718 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.client.rpc.TestContainerStateMachine | | | hadoop.hdds.scm.pipeline.TestSCMPipelineManager | | | hadoop.ozone.om.snapshot.TestOzoneManagerSnapshotProvider | | | hadoop.ozone.client.rpc.TestContainerStateMachineFailures | | | hadoop.ozone.om.TestOzoneManagerHA | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1255 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs | | uname | Linux 5bbd24f32b4f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c749f62 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/10/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/10/testReport/ | | Max. process+thread count | 4300 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-hdds/config hadoop-hdds/framework hadoop-hdds/server-scm hadoop-ozone hadoop-ozone/common hadoop-ozone/dist hadoop-ozone/insight hadoop-ozone/ozone-manager U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1255/10/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#discussion_r318998112 ## File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java ## @@ -0,0 +1,109 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.hadoop.ozone.freon; + +import java.io.InputStream; +import java.security.MessageDigest; +import java.util.concurrent.Callable; + +import org.apache.hadoop.hdds.cli.HddsVersionProvider; +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.ozone.client.OzoneBucket; +import org.apache.hadoop.ozone.client.OzoneClient; +import org.apache.hadoop.ozone.client.OzoneClientFactory; + +import com.codahale.metrics.Timer; +import org.apache.commons.io.IOUtils; +import picocli.CommandLine.Command; +import picocli.CommandLine.Option; + +/** + * Data generator tool test om performance. 
+ */ +@Command(name = "ocokr", +aliases = "ozone-client-one-key-reader", +description = "Read the same key from multiple threads.", +versionProvider = HddsVersionProvider.class, +mixinStandardHelpOptions = true, +showDefaultValues = true) +public class SameKeyReader extends BaseFreonGenerator +implements Callable { + + @Option(names = {"-v", "--volume"}, + description = "Name of the bucket which contains the test data. Will be" + + " created if missing.", + defaultValue = "vol1") + private String volumeName; + + @Option(names = {"-b", "--bucket"}, + description = "Name of the bucket which contains the test data. Will be" + + " created if missing.", + defaultValue = "bucket1") + private String bucketName; + + @Option(names = {"-k", "--key"}, + description = "Name of the key read from multiple threads", + defaultValue = "bucket1") Review comment: This test needs an existing key to read. I'm not sure first-time users would create a key named `bucket1`. If we assume the most frequent usage will be validating one of the keys created by `ockg`, then it could default to `/0`, but we'd need some specific prefix for that. Alternatively, we could pick some existing key from the given volume/bucket, which works with random prefix, too. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus removed a comment on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-524270997 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 78 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 628 | trunk passed | | +1 | compile | 368 | trunk passed | | +1 | checkstyle | 71 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 960 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 179 | trunk passed | | 0 | spotbugs | 518 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 766 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 584 | the patch passed | | +1 | compile | 398 | the patch passed | | +1 | javac | 398 | the patch passed | | +1 | checkstyle | 78 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. | | +1 | shadedclient | 762 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 175 | the patch passed | | -1 | findbugs | 537 | hadoop-ozone generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 383 | hadoop-hdds in the patch passed. | | -1 | unit | 3403 | hadoop-ozone in the patch failed. | | +1 | asflicense | 55 | The patch does not generate ASF License warnings. 
| | | | 9835 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 259] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | Failed junit tests | hadoop.ozone.container.server.TestSecureContainerServer | | | hadoop.hdds.scm.pipeline.TestNode2PipelineMap | | | hadoop.hdds.scm.pipeline.TestNodeFailure | | | hadoop.ozone.client.rpc.Test2WayCommitInRatis | | | hadoop.ozone.om.TestOzoneManagerHA | | | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption | | | hadoop.ozone.client.rpc.TestCommitWatcher | | | hadoop.ozone.TestMiniOzoneCluster | | | hadoop.hdds.scm.pipeline.TestSCMRestart | | | hadoop.ozone.TestMiniChaosOzoneCluster | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux a7e864d92109 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / bd7baea | | Default Java | 1.8.0_222 | | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/1/artifact/out/new-findbugs-hadoop-ozone.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/1/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/1/testReport/ | | Max. process+thread count | 3993 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus removed a comment on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-524399971 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 98 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 650 | trunk passed | | +1 | compile | 362 | trunk passed | | +1 | checkstyle | 65 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 823 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 152 | trunk passed | | 0 | spotbugs | 420 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 609 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 549 | the patch passed | | +1 | compile | 362 | the patch passed | | +1 | javac | 362 | the patch passed | | +1 | checkstyle | 69 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 2 | The patch has no ill-formed XML file. | | +1 | shadedclient | 671 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 152 | the patch passed | | -1 | findbugs | 426 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 219 | hadoop-hdds in the patch failed. | | -1 | unit | 2613 | hadoop-ozone in the patch failed. | | +1 | asflicense | 43 | The patch does not generate ASF License warnings. 
| | | | 8234 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 263] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | | Unused field:SameKeyReader.java | | | Unused field:SameKeyReader.java | | Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainer | | | hadoop.ozone.container.server.TestSecureContainerServer | | | hadoop.ozone.TestStorageContainerManager | | | hadoop.ozone.client.rpc.TestCommitWatcher | | | hadoop.ozone.client.rpc.Test2WayCommitInRatis | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 03d0f0374a50 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c92de82 | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/2/artifact/out/new-findbugs-hadoop-ozone.html | | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/2/artifact/out/patch-unit-hadoop-hdds.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/2/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/2/testReport/ | | Max. process+thread count | 4876 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts URL: https://github.com/apache/hadoop/pull/1348#issuecomment-526124532 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 41 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 627 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 874 | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 565 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 729 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | unit | 107 | hadoop-hdds in the patch passed. | | +1 | unit | 283 | hadoop-ozone in the patch passed. | | +1 | asflicense | 42 | The patch does not generate ASF License warnings. 
| | | | 3475 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/17/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1348 | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux b2d4219b6362 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c749f62 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/17/testReport/ | | Max. process+thread count | 306 (vs. ulimit of 5500) | | modules | C: hadoop-ozone U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/17/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1341: HDDS-2022. Add additional freon tests
hadoop-yetus removed a comment on issue #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#issuecomment-525200832 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 86 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 662 | trunk passed | | +1 | compile | 387 | trunk passed | | +1 | checkstyle | 86 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 975 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 170 | trunk passed | | 0 | spotbugs | 517 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 751 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 631 | the patch passed | | +1 | compile | 433 | the patch passed | | +1 | javac | 433 | the patch passed | | +1 | checkstyle | 97 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 862 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 217 | the patch passed | | -1 | findbugs | 487 | hadoop-ozone generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 368 | hadoop-hdds in the patch passed. | | -1 | unit | 2767 | hadoop-ozone in the patch failed. | | +1 | asflicense | 44 | The patch does not generate ASF License warnings. 
| | | | 9509 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-ozone | | | Possible null pointer dereference of volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:volume in org.apache.hadoop.ozone.freon.BaseFreonGenerator.ensureVolumeAndBucketExist(OzoneConfiguration, String, String) on exception path Dereferenced at BaseFreonGenerator.java:[line 263] | | | Unused field:OzoneClientKeyValidator.java | | | Unused field:OzoneClientKeyValidator.java | | | Dead store to configuration in org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:org.apache.hadoop.ozone.freon.S3KeyGenerator.call() At S3KeyGenerator.java:[line 78] | | | Unused field:SameKeyReader.java | | | Unused field:SameKeyReader.java | | Failed junit tests | hadoop.ozone.container.server.TestSecureContainerServer | | | hadoop.ozone.client.rpc.TestOzoneRpcClientForAclAuditLog | | | hadoop.hdds.scm.container.TestContainerStateManagerIntegration | | | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=18.09.7 Server=18.09.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1341 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux addb1933f273 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b69ac57 | | Default Java | 1.8.0_222 | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/3/artifact/out/new-findbugs-hadoop-ozone.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/3/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results 
| https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/3/testReport/ | | Max. process+thread count | 5048 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1341/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] elek commented on a change in pull request #1341: HDDS-2022. Add additional freon tests
elek commented on a change in pull request #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#discussion_r318994164 ## File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java ## @@ -0,0 +1,325 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. 
+ */ +package org.apache.hadoop.ozone.freon; + +import java.io.IOException; +import java.io.InputStream; +import java.net.InetSocketAddress; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.ipc.Client; +import org.apache.hadoop.ipc.ProtobufRpcEngine; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.net.NetUtils; +import org.apache.hadoop.ozone.OmUtils; +import org.apache.hadoop.ozone.client.OzoneClient; +import org.apache.hadoop.ozone.client.OzoneClientFactory; +import org.apache.hadoop.ozone.client.OzoneVolume; +import org.apache.hadoop.ozone.om.exceptions.OMException; +import org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes; +import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB; +import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB; +import org.apache.hadoop.security.UserGroupInformation; + +import com.codahale.metrics.ConsoleReporter; +import com.codahale.metrics.MetricRegistry; +import org.apache.commons.codec.digest.DigestUtils; +import org.apache.commons.lang3.RandomStringUtils; +import org.apache.ratis.protocol.ClientId; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import picocli.CommandLine.Option; +import picocli.CommandLine.ParentCommand; + +/** + * Base class for simplified performance tests. 
+ */ +public class BaseFreonGenerator { + + private static final Logger LOG = + LoggerFactory.getLogger(BaseFreonGenerator.class); + + private static final int CHECK_INTERVAL_MILLIS = 1000; + + private static final String DIGEST_ALGORITHM = "MD5"; + + private static final Pattern ENV_VARIABLE_IN_PATTERN = + Pattern.compile("__(.+?)__"); + + @ParentCommand + private Freon freonCommand; + + @Option(names = {"-n", "--number-of-tests"}, + description = "Number of the generated objects.", + defaultValue = "1000") + private long testNo = 1000; + + @Option(names = {"-t", "--threads", "--thread"}, + description = "Number of threads used to execute", + defaultValue = "10") + private int threadNo; + + @Option(names = {"-f", "--fail-at-end"}, + description = "If turned on, all the tasks will be executed even if " + + "there are failures.") + private boolean failAtEnd; + + @Option(names = {"-p", "--prefix"}, + description = "Unique identifier of the test execution. Usually used as" + + " a prefix of the generated object names. If empty, a random name" + + " will be generated", + defaultValue = "") + private String prefix = ""; + + private MetricRegistry metrics = new MetricRegistry(); + + private ExecutorService executor; + + private AtomicLong successCounter; Review comment: Yes, thanks for pointing out this bug to me earlier IRL. I have started using the for-loop variable. Since we are creating the tasks in a loop, I think we can use a simple long instead of an atomic one.
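A dependency-free sketch of the change elek describes — numbering tasks with the loop variable captured at submission time instead of a shared `AtomicLong` (plain `java.util.concurrent`, not the actual Freon code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoopIndexTasks {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Future<String>> results = new ArrayList<>();
        for (long i = 0; i < 10; i++) {
            // Capture the loop index as an effectively final local:
            // each task gets its own id without a shared atomic counter.
            final long taskId = i;
            results.add(executor.submit(() -> "task-" + taskId));
        }
        for (Future<String> f : results) {
            System.out.println(f.get()); // task-0 ... task-9, in order
        }
        executor.shutdown();
    }
}
```

An atomic counter is only needed when the id is produced inside the concurrently running task; when ids are assigned at submission time, a plain long suffices.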
[GitHub] [hadoop] bshashikant commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach…
bshashikant commented on issue #1363: HDDS-1783 : Latency metric for applyTransaction in ContainerStateMach… URL: https://github.com/apache/hadoop/pull/1363#issuecomment-526122511 Thanks @avijayanhwx for updating. Can we also add some tests for the added metric in TestCSMMetrics? Sorry for not mentioning it in the earlier review.
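The kind of test being asked for usually just drives the instrumented operation and asserts that the metric moved. A dependency-free sketch of that pattern (`SimpleLatencyMetric` is hypothetical — the real test would assert on the Timer exposed by the ContainerStateMachine metrics, whose API is not shown here):

```java
import java.util.function.Supplier;

public class LatencyMetricSketch {

    /** Hypothetical minimal latency metric: sample count plus total nanos. */
    static class SimpleLatencyMetric {
        long count;
        long totalNanos;

        <T> T capture(Supplier<T> op) {
            long start = System.nanoTime();
            try {
                return op.get();
            } finally {
                totalNanos += System.nanoTime() - start;
                count++;
            }
        }
    }

    public static void main(String[] args) {
        SimpleLatencyMetric applyTxnLatency = new SimpleLatencyMetric();
        // Drive the instrumented operation once, then check the metric.
        String result = applyTxnLatency.capture(() -> "applied");
        if (applyTxnLatency.count != 1 || applyTxnLatency.totalNanos < 0) {
            throw new AssertionError("latency metric did not record the call");
        }
        System.out.println(result + " samples=" + applyTxnLatency.count);
    }
}
```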
[GitHub] [hadoop] hadoop-yetus commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket
hadoop-yetus commented on issue #1277: HDDS-1054. List Multipart uploads in a bucket URL: https://github.com/apache/hadoop/pull/1277#issuecomment-52616 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 76 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 14 | Maven dependency ordering for branch | | +1 | mvninstall | 616 | trunk passed | | +1 | compile | 386 | trunk passed | | +1 | checkstyle | 69 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 926 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 175 | trunk passed | | 0 | spotbugs | 471 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 709 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 21 | Maven dependency ordering for patch | | +1 | mvninstall | 610 | the patch passed | | +1 | compile | 409 | the patch passed | | +1 | cc | 409 | the patch passed | | +1 | javac | 409 | the patch passed | | +1 | checkstyle | 95 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 808 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 198 | the patch passed | | +1 | findbugs | 685 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 374 | hadoop-hdds in the patch passed. | | -1 | unit | 262 | hadoop-ozone in the patch failed. | | +1 | asflicense | 40 | The patch does not generate ASF License warnings. 
| | | | 6658 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.om.TestKeyManagerUnit | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1277 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux fc03450f9b59 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / c749f62 | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/testReport/ | | Max. process+thread count | 1237 (vs. ulimit of 5500) | | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozone-manager hadoop-ozone/s3gateway hadoop-ozone/dist U: hadoop-ozone | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1277/8/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] elek commented on a change in pull request #1341: HDDS-2022. Add additional freon tests
elek commented on a change in pull request #1341: HDDS-2022. Add additional freon tests URL: https://github.com/apache/hadoop/pull/1341#discussion_r318993155 ## File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java ## @@ -0,0 +1,109 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations under + * the License. + */ +package org.apache.hadoop.ozone.freon; + +import java.io.InputStream; +import java.security.MessageDigest; +import java.util.concurrent.Callable; + +import org.apache.hadoop.hdds.cli.HddsVersionProvider; +import org.apache.hadoop.hdds.conf.OzoneConfiguration; +import org.apache.hadoop.ozone.client.OzoneBucket; +import org.apache.hadoop.ozone.client.OzoneClient; +import org.apache.hadoop.ozone.client.OzoneClientFactory; + +import com.codahale.metrics.Timer; +import org.apache.commons.io.IOUtils; +import picocli.CommandLine.Command; +import picocli.CommandLine.Option; + +/** + * Data generator tool test om performance. 
+ */ +@Command(name = "ocokr", +aliases = "ozone-client-one-key-reader", +description = "Read the same key from multiple threads.", +versionProvider = HddsVersionProvider.class, +mixinStandardHelpOptions = true, +showDefaultValues = true) +public class SameKeyReader extends BaseFreonGenerator +implements Callable { + + @Option(names = {"-v", "--volume"}, + description = "Name of the bucket which contains the test data. Will be" + + " created if missing.", + defaultValue = "vol1") + private String volumeName; + + @Option(names = {"-b", "--bucket"}, + description = "Name of the bucket which contains the test data. Will be" + + " created if missing.", + defaultValue = "bucket1") + private String bucketName; + + @Option(names = {"-k", "--key"}, + description = "Name of the key read from multiple threads", + defaultValue = "bucket1") Review comment: What is the problem with bucket1? I think power users can configure it, but I would prefer to make it as easy as possible for first-time users. `bucket1` seems as good a choice as anything else to me.
[GitHub] [hadoop] hadoop-yetus commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.
hadoop-yetus commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM. URL: https://github.com/apache/hadoop/pull/1225#issuecomment-526117907 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 69 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | 0 | shelldocs | 0 | Shelldocs was not available. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 24 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 72 | Maven dependency ordering for branch | | +1 | mvninstall | 593 | trunk passed | | +1 | compile | 376 | trunk passed | | +1 | checkstyle | 80 | trunk passed | | +1 | mvnsite | 0 | trunk passed | | +1 | shadedclient | 852 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 172 | trunk passed | | 0 | spotbugs | 443 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 646 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 39 | Maven dependency ordering for patch | | +1 | mvninstall | 567 | the patch passed | | +1 | compile | 401 | the patch passed | | +1 | javac | 401 | the patch passed | | +1 | checkstyle | 87 | the patch passed | | +1 | mvnsite | 0 | the patch passed | | +1 | shellcheck | 0 | There were no new shellcheck issues. | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 737 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 190 | the patch passed | | +1 | findbugs | 736 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 370 | hadoop-hdds in the patch passed. | | -1 | unit | 2299 | hadoop-ozone in the patch failed. | | +1 | asflicense | 52 | The patch does not generate ASF License warnings. 
| | | | 8617 | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.ozone.TestSecureOzoneCluster | | | hadoop.ozone.TestOzoneConfigurationFields | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/23/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1225 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs | | uname | Linux 9edea6fda407 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 371c9eb | | Default Java | 1.8.0_222 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/23/artifact/out/patch-unit-hadoop-ozone.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/23/testReport/ | | Max. process+thread count | 4716 (vs. ulimit of 5500) | | modules | C: hadoop-hdds/common hadoop-ozone/dist hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/23/console | | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1365: HDDS-1949. Missing or error-prone test cleanup
hadoop-yetus commented on issue #1365: HDDS-1949. Missing or error-prone test cleanup
URL: https://github.com/apache/hadoop/pull/1365#issuecomment-526114632

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 48 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 9 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 554 | trunk passed |
| +1 | compile | 381 | trunk passed |
| +1 | checkstyle | 71 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 841 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 175 | trunk passed |
| 0 | spotbugs | 437 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 640 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 549 | the patch passed |
| +1 | compile | 399 | the patch passed |
| +1 | javac | 399 | the patch passed |
| +1 | checkstyle | 87 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 818 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 190 | the patch passed |
| +1 | findbugs | 735 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 329 | hadoop-hdds in the patch passed. |
| -1 | unit | 1646 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
| | | 7697 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
| | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
| | hadoop.ozone.TestContainerOperations |
| | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
| | hadoop.ozone.client.rpc.TestDeleteWithSlowFollower |
| | hadoop.ozone.TestOzoneConfigurationFields |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1365 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 06386c041082 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 371c9eb |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/4/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/4/testReport/ |
| Max. process+thread count | 3721 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone/integration-test U: hadoop-ozone/integration-test |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1365/4/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525426300

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 605 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 828 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 568 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 1 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 662 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 110 | hadoop-hdds in the patch passed. |
| +1 | unit | 292 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
| | | 3377 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 3faebe560c6e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 8ab7020 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/5/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/5/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/5/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525457744

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 40 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 580 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 820 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 601 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 2 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 763 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 112 | hadoop-hdds in the patch passed. |
| +1 | unit | 307 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
| | | 3468 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 86d6e41dab67 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 66cfa48 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/10/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/10/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/10/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525771887

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 166 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| -1 | mvninstall | 366 | hadoop-ozone in trunk failed. |
| +1 | mvnsite | 1 | trunk passed |
| -1 | shadedclient | 44 | branch has errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 65 | hadoop-ozone in the patch failed. |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 0 | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 41 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| -1 | unit | 35 | hadoop-hdds in the patch failed. |
| -1 | unit | 38 | hadoop-ozone in the patch failed. |
| 0 | asflicense | 40 | ASF License check generated no output? |
| | | 1218 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux dce892e56430 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 55cc115 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/diff-patch-shellcheck.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/testReport/ |
| Max. process+thread count | 94 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/12/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525895154

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 205 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 1 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 807 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1046 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 699 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 2 | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 873 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 128 | hadoop-hdds in the patch passed. |
| +1 | unit | 333 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 68 | The patch does not generate ASF License warnings. |
| | | 4408 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/14/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 79ed1d7da6fc 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 29bd6f3 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/14/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/14/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/14/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-526105839

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 43 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| 0 | shelldocs | 1 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 573 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 817 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 545 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 0 | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 703 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 116 | hadoop-hdds in the patch passed. |
| +1 | unit | 284 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
| | | 3370 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/16/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 325475ccba24 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / c749f62 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/16/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/16/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/16/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525450361

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 43 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 629 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 809 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 561 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 2 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 706 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 113 | hadoop-hdds in the patch passed. |
| +1 | unit | 289 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
| | | 3418 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 5310f5c457e6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 66cfa48 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/7/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/7/testReport/ |
| Max. process+thread count | 411 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/7/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525447935

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 70 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 682 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 807 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 549 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 0 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 697 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 112 | hadoop-hdds in the patch passed. |
| +1 | unit | 287 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
| | | 3466 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux f5ef4bdfe07e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 66cfa48 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/6/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/6/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/6/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525457148

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 111 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 788 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1062 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 716 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 1 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 873 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 121 | hadoop-hdds in the patch passed. |
| +1 | unit | 327 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | 4280 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 7bde063a1180 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 66cfa48 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/8/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/8/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/8/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525891507

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 153 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 1 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 820 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1073 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 734 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 2 | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 950 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 156 | hadoop-hdds in the patch passed. |
| +1 | unit | 386 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 66 | The patch does not generate ASF License warnings. |
| | | 4605 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/13/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 599ca0af3357 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 48cb583 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/13/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/13/testReport/ |
| Max. process+thread count | 316 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/13/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525909710

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 142 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 795 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 1067 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 774 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 1 | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 977 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 137 | hadoop-hdds in the patch passed. |
| +1 | unit | 360 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 54 | The patch does not generate ASF License warnings. |
| | | 4565 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/15/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux a82aa08240e3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / aef6a4f |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/15/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/15/testReport/ |
| Max. process+thread count | 309 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/15/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525454707

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 34 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| 0 | shelldocs | 1 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 606 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 846 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 555 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 0 | The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 756 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 101 | hadoop-hdds in the patch passed. |
| +1 | unit | 284 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
| | | 3414 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 89a6f21e8199 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 66cfa48 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/9/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/9/testReport/ |
| Max. process+thread count | 305 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/9/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus removed a comment on issue #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#issuecomment-525767945

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 71 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| 0 | shelldocs | 0 | Shelldocs was not available. |
| 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 635 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 884 | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 569 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| -1 | shellcheck | 0 | The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 757 | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | unit | 103 | hadoop-hdds in the patch passed. |
| +1 | unit | 290 | hadoop-ozone in the patch passed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | 3549 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.0 Server=19.03.0 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1348 |
| Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
| uname | Linux 108ac3376790 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 55cc115 |
| shellcheck | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/11/artifact/out/diff-patch-shellcheck.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/11/testReport/ |
| Max. process+thread count | 333 (vs. ulimit of 5500) |
| modules | C: hadoop-ozone U: hadoop-ozone |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1348/11/console |
| versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
hadoop-yetus commented on a change in pull request #1348: HDDS-2030. Generate simplifed reports by the dev-support/checks/*.sh scripts
URL: https://github.com/apache/hadoop/pull/1348#discussion_r318267710

## File path: hadoop-ozone/dev-support/checks/_mvn_unit_report.sh

## @@ -0,0 +1,53 @@

```diff
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+## generate summary txt file
+find "." -name 'TEST*.xml' -print0 \
+| xargs -n1 -0 "grep" -l -E "> "$SUMMARY_FILE"
+done
+done
+printf "\n\n" >> "$SUMMARY_FILE"
+printf "# Failing tests: \n\n" | cat $SUMMARY_FILE > temp && mv temp "$SUMMARY_FILE"
```

Review comment: shellcheck:38: note: Double quote to prevent globbing and word splitting. [SC2086]

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
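The SC2086 note above is about unquoted expansions such as `cat $SUMMARY_FILE`: without quotes, the expanded value is word-split and glob-expanded before `cat` runs. A minimal sketch of the fix shellcheck suggests (the file name here is a hypothetical example chosen to contain a space, not the script's actual path):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical summary path containing a space, to show why quoting matters.
SUMMARY_FILE="unit summary.txt"

# Quoted on the redirection target: one file named "unit summary.txt" is written.
printf '# Failing tests:\n' > "$SUMMARY_FILE"

# Unquoted `cat $SUMMARY_FILE` would word-split into two arguments
# ("unit" and "summary.txt") and fail; quoting keeps the expansion as a
# single argument, which is exactly what SC2086 asks for.
cat "$SUMMARY_FILE"
```

The same quoting applies to the flagged `cat $SUMMARY_FILE > temp && mv temp "$SUMMARY_FILE"` line in the patch: quoting the first expansion makes it robust to paths with spaces or glob characters.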