[jira] [Assigned] (HADOOP-16854) ABFS: Tune the logic calculating max concurrent request count
[ https://issues.apache.org/jira/browse/HADOOP-16854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H reassigned HADOOP-16854: - Assignee: Bilahari T H (was: Sneha Vijayarajan) > ABFS: Tune the logic calculating max concurrent request count > - > > Key: HADOOP-16854 > URL: https://issues.apache.org/jira/browse/HADOOP-16854 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sneha Vijayarajan >Assignee: Bilahari T H >Priority: Major > > Currently, in environments where memory is restricted, the max concurrent > request count logic triggers allocation of a large number of buffers for requests > that end up blocked, leading to OutOfMemory exceptions. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
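The issue description above is terse, so here is a minimal sketch of the kind of tuning it asks for (hypothetical class and buffer size, not ABFS code): derive the concurrency cap from the available heap and enforce it with a semaphore, so that a memory-restricted environment degrades to blocking rather than running out of memory.

```java
import java.util.concurrent.Semaphore;

/**
 * Hypothetical sketch (not the ABFS implementation): cap the number of
 * outstanding request buffers based on heap size instead of a fixed count.
 */
class BoundedBufferPool {
    // Assumed per-request buffer size; ABFS's real value may differ.
    static final int BUFFER_SIZE = 8 * 1024 * 1024;

    private final Semaphore permits;

    BoundedBufferPool(long maxHeapBytes, double heapFraction) {
        // Let buffers occupy at most heapFraction of the heap, but always
        // permit at least one in-flight request so progress is possible.
        int max = (int) Math.max(1, (maxHeapBytes * heapFraction) / BUFFER_SIZE);
        this.permits = new Semaphore(max);
    }

    int availablePermits() {
        return permits.availablePermits();
    }

    /** Blocks instead of allocating a new buffer when the cap is reached. */
    byte[] acquireBuffer() throws InterruptedException {
        permits.acquire();
        return new byte[BUFFER_SIZE];
    }

    void releaseBuffer() {
        permits.release();
    }
}
```

With a 256 MB heap and a 25% budget, this yields 8 permits; the same logic on a tiny heap still allows one request, trading throughput for stability.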
[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move et
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#issuecomment-598022742

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 14s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 21m 41s | trunk passed |
| +1 :green_heart: | compile | 1m 7s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 48s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 13s | trunk passed |
| +1 :green_heart: | shadedclient | 17m 33s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 41s | trunk passed |
| +0 :ok: | spotbugs | 3m 2s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 58s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 7s | the patch passed |
| +1 :green_heart: | compile | 1m 3s | the patch passed |
| -1 :x: | javac | 1m 3s | hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579) |
| -0 :warning: | checkstyle | 0m 44s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 339 unchanged - 0 fixed = 345 total (was 339) |
| +1 :green_heart: | mvnsite | 1m 10s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 16m 43s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 43s | hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100) |
| +1 :green_heart: | findbugs | 3m 18s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 108m 2s | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 181m 53s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 23bd4bf9dade 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 0b931f3 |
| Default Java | 1.8.0_242 |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/testReport/ |
| Max. process+thread count | 2876 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/15/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc.
jojochuang commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#issuecomment-597983050 Test failures due to OOM, unrelated. Triggered a rebuild regardless.
[GitHub] [hadoop] hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move et
hadoop-yetus commented on issue #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#issuecomment-597979746

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 46s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 24m 12s | trunk passed |
| +1 :green_heart: | compile | 1m 25s | trunk passed |
| +1 :green_heart: | checkstyle | 1m 2s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 27s | trunk passed |
| +1 :green_heart: | shadedclient | 20m 29s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 49s | trunk passed |
| +0 :ok: | spotbugs | 3m 34s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 31s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 23s | the patch passed |
| +1 :green_heart: | compile | 1m 13s | the patch passed |
| -1 :x: | javac | 1m 13s | hadoop-hdfs-project_hadoop-hdfs generated 6 new + 579 unchanged - 0 fixed = 585 total (was 579) |
| -0 :warning: | checkstyle | 0m 50s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 339 unchanged - 0 fixed = 345 total (was 339) |
| +1 :green_heart: | mvnsite | 1m 15s | the patch passed |
| +1 :green_heart: | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 17m 16s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 41s | hadoop-hdfs-project_hadoop-hdfs generated 1 new + 100 unchanged - 0 fixed = 101 total (was 100) |
| +1 :green_heart: | findbugs | 3m 30s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 110m 53s | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | The patch does not generate ASF License warnings. |
| | | 192m 47s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
| | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.web.TestWebHDFS |
| | hadoop.hdfs.TestWriteRead |
| | hadoop.hdfs.TestAclsEndToEnd |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
| | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
| | hadoop.hdfs.TestDecommission |
| | hadoop.hdfs.TestWriteReadStripedFile |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1829 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 6e3684b16c0f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 0b931f3 |
| Default Java | 1.8.0_242 |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1829/14/testReport/ |
| Max. process+thread count | 4794 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-
[GitHub] [hadoop] brfrn169 commented on a change in pull request #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong
brfrn169 commented on a change in pull request #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong URL: https://github.com/apache/hadoop/pull/1889#discussion_r391371213 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java ## @@ -29,17 +29,19 @@ @InterfaceAudience.Private @InterfaceStability.Unstable public class FakeTimer extends Timer { + private long now; private long nowNanos; /** Constructs a FakeTimer with a non-zero value */ public FakeTimer() { // Initialize with a non-trivial value. +now = 1577836800000L; // 2020-01-01 00:00:00,000+ Review comment: Thank you for the comment. Actually, I think this is closer to the real behavior than the original one, because `Timer.now()` returns the time based on `System.currentTimeMillis()`, while `Timer.monotonicNow()` and `Timer.monotonicNowNanos()` return the time based on `System.nanoTime()`, which are different clocks. This change is useful for verifying the fix in this PR, and it doesn't break the existing tests. What do you think?
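The two-clocks point in the comment above can be sketched as follows (a simplified stand-in, not Hadoop's actual FakeTimer): the fake wall clock and fake monotonic clock start from unrelated epochs, just as `System.currentTimeMillis()` and `System.nanoTime()` do, but advance together by the same duration.

```java
/**
 * Simplified sketch (not Hadoop's FakeTimer): a test timer that keeps
 * separate wall-clock and monotonic values, mirroring the fact that
 * currentTimeMillis() and nanoTime() are unrelated clocks.
 */
class TwoClockFakeTimer {
    private long nowMillis; // fake wall clock (currentTimeMillis analogue)
    private long nowNanos;  // fake monotonic clock (nanoTime analogue)

    TwoClockFakeTimer(long wallStartMillis, long monotonicStartNanos) {
        this.nowMillis = wallStartMillis;
        this.nowNanos = monotonicStartNanos;
    }

    long now() {
        return nowMillis;
    }

    long monotonicNow() {
        return nowNanos / 1_000_000;
    }

    long monotonicNowNanos() {
        return nowNanos;
    }

    /** Advancing the timer moves both clocks by the same real duration. */
    void advance(long millis) {
        nowMillis += millis;
        nowNanos += millis * 1_000_000;
    }
}
```

A test using such a timer can assert that a log message carries the wall-clock timestamp while durations are computed from the monotonic clock, which is exactly the distinction the PR fixes.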
[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmd
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#discussion_r391313005 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java ## @@ -0,0 +1,167 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.server.namenode; + +import org.apache.hadoop.ipc.CallerContext; +import org.apache.hadoop.security.UserGroupInformation; +import org.junit.Before; +import org.junit.Test; + +import java.io.IOException; + +import static junit.framework.TestCase.assertEquals; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestAuthorizationContext { + + private String fsOwner = "hdfs"; + private String superGroup = "hdfs"; + private UserGroupInformation ugi = UserGroupInformation. 
+ createUserForTesting(fsOwner, new String[] {superGroup}); + + private INodeAttributes[] emptyINodeAttributes = new INodeAttributes[] {}; + private INodesInPath iip = mock(INodesInPath.class); + private int snapshotId = 0; + private INode[] inodes = new INode[] {}; + private byte[][] components = new byte[][] {}; + private String path = ""; + private int ancestorIndex = inodes.length - 2; + + @Before + public void setUp() throws IOException { +when(iip.getPathSnapshotId()).thenReturn(snapshotId); +when(iip.getINodesArray()).thenReturn(inodes); +when(iip.getPathComponents()).thenReturn(components); +when(iip.getPath()).thenReturn(path); + } + + @Test + public void testBuilder() { +String opType = "test"; +CallerContext.setCurrent(new CallerContext.Builder( +"TestAuthorizationContext").build()); + +INodeAttributeProvider.AuthorizationContext.Builder builder = +new INodeAttributeProvider.AuthorizationContext.Builder(); +builder.fsOwner(fsOwner). +supergroup(superGroup). +callerUgi(ugi). +inodeAttrs(emptyINodeAttributes). +inodes(inodes). +pathByNameArr(components). +snapshotId(snapshotId). +path(path). +ancestorIndex(ancestorIndex). +doCheckOwner(true). +ancestorAccess(null). +parentAccess(null). +access(null). +subAccess(null). +ignoreEmptyDir(true). +operationName(opType). 
+callerContext(CallerContext.getCurrent()); + +INodeAttributeProvider.AuthorizationContext authzContext = builder.build(); +assertEquals(authzContext.getFsOwner(), fsOwner); +assertEquals(authzContext.getSupergroup(), superGroup); +assertEquals(authzContext.getCallerUgi(), ugi); +assertEquals(authzContext.getInodeAttrs(), emptyINodeAttributes); +assertEquals(authzContext.getInodes(), inodes); +assertEquals(authzContext.getPathByNameArr(), components); +assertEquals(authzContext.getSnapshotId(), snapshotId); +assertEquals(authzContext.getPath(), path); +assertEquals(authzContext.getAncestorIndex(), ancestorIndex); +assertEquals(authzContext.getOperationName(), opType); +assertEquals(authzContext.getCallerContext(), CallerContext.getCurrent()); + } + + @Test + public void testLegacyAPI() throws IOException { +INodeAttributeProvider.AccessControlEnforcer +mockEnforcer = mock(INodeAttributeProvider.AccessControlEnforcer.class); +INodeAttributeProvider mockINodeAttributeProvider = +mock(INodeAttributeProvider.class); +when(mockINodeAttributeProvider.getExternalAccessControlEnforcer(any())). +thenReturn(mockEnforcer); + +FSPermissionChecker checker = new FSPermissionChecker( +fsOwner, superGroup, ugi, mockINodeAttributeProvider); Review comment: this is covered by existing tests when FSDirectory initializes a FSPermissionChecker, so this is good.
[GitHub] [hadoop] jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmd
jojochuang commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#discussion_r391311899 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1982,6 +1982,7 @@ void setPermission(String src, FsPermission permission) throws IOException { FileStatus auditStat; checkOperation(OperationCategory.WRITE); final FSPermissionChecker pc = getPermissionChecker(); +FSPermissionChecker.setOperationType(operationName); Review comment: Thanks @xiaoyuyao for the review. * FSDirSymlinkOp#createSymlinkInt() is an exception. It doesn't check permission in the FSNamesystem, so this one was missed. Added. * NameNodeAdapter#getFileInfo() is used only in tests. * NamenodeFsck#getBlockLocations() --> call it fsckGetBlockLocations to distinguish it from regular open operations. * FSNDNCache#addCacheDirective/removeCacheDirective/modifyCacheDirective/listCacheDirectives/listCachePools --> done
[jira] [Resolved] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16890. - Fix Version/s: 3.3.0 Resolution: Fixed > ABFS: Change in expiry calculation for MSI token provider > - > > Key: HADOOP-16890 > URL: https://issues.apache.org/jira/browse/HADOOP-16890 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > Fix For: 3.3.0 > > > Set token expiry time as the value of the expires_on field from the MSI response > in case it is present
[jira] [Updated] (HADOOP-16890) ABFS: Change in expiry calculation for MSI token provider
[ https://issues.apache.org/jira/browse/HADOOP-16890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16890: Affects Version/s: 3.3.0 > ABFS: Change in expiry calculation for MSI token provider > - > > Key: HADOOP-16890 > URL: https://issues.apache.org/jira/browse/HADOOP-16890 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > Fix For: 3.3.0 > > > Set token expiry time as the value of the expires_on field from the MSI response > in case it is present
[GitHub] [hadoop] steveloughran merged pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
steveloughran merged pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872
[GitHub] [hadoop] steveloughran commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
steveloughran commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#discussion_r391253178 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java ## @@ -408,17 +409,29 @@ private static AzureADToken parseTokenFromStream(InputStream httpResponseStream) if (fieldName.equals("access_token")) { token.setAccessToken(fieldValue); } + if (fieldName.equals("expires_in")) { -expiryPeriod = Integer.parseInt(fieldValue); +expiryPeriodInSecs = Integer.parseInt(fieldValue); + } + + if (fieldName.equals("expires_on")) { +expiresOnInSecs = Long.parseLong(fieldValue); } + } jp.nextToken(); } jp.close(); - long expiry = System.currentTimeMillis(); - expiry = expiry + expiryPeriod * 1000L; // convert expiryPeriod to milliseconds and add - token.setExpiry(new Date(expiry)); - LOG.debug("AADToken: fetched token with expiry " + token.getExpiry().toString()); + if (expiresOnInSecs > -1) { +token.setExpiry(new Date(expiresOnInSecs * 1000)); + } else { +long expiry = System.currentTimeMillis(); Review comment: it'd be good to have some test JSONs here for real-world responses
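For illustration, the expiry selection in the diff above can be factored into a small helper (hypothetical names, not the actual AzureADAuthenticator code): `expires_on` is an absolute epoch-seconds timestamp and wins when present; otherwise expiry is computed relative to the current time from the `expires_in` period.

```java
import java.util.Date;

/**
 * Hypothetical helper sketching the expiry logic in the PR diff:
 * prefer the absolute "expires_on" field over the relative "expires_in".
 */
class TokenExpiry {
    static Date computeExpiry(long expiresOnInSecs,
                              long expiryPeriodInSecs,
                              long nowMillis) {
        if (expiresOnInSecs > -1) {
            // expires_on is absolute: seconds since the Unix epoch.
            return new Date(expiresOnInSecs * 1000L);
        }
        // expires_in is relative: seconds from "now".
        return new Date(nowMillis + expiryPeriodInSecs * 1000L);
    }
}
```

Passing `nowMillis` as a parameter (instead of calling `System.currentTimeMillis()` inside) is what makes the fallback branch testable with the kind of canned JSON responses requested in the review.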
[GitHub] [hadoop] steveloughran commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
steveloughran commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#discussion_r390958751 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java ## @@ -258,8 +258,13 @@ public UnexpectedResponseException(final int httpErrorCode, } private static AzureADToken getTokenCall(String authEndpoint, String body, - Hashtable headers, String httpMethod) - throws IOException { + Hashtable headers, String httpMethod) throws IOException { +return getTokenCall(authEndpoint, body, headers, httpMethod, false); + } + + private static AzureADToken getTokenCall(String authEndpoint, String body, + Hashtable headers, String httpMethod, boolean isMsi) Review comment: Not something needing changing in this PR, but this should really be Map<> and the code to move to a HashMap; all of Hashtable's methods are synchronized and it underperforms.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
steveloughran commented on a change in pull request #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#discussion_r391253397 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/MsiTokenProvider.java ## @@ -36,6 +36,10 @@ private final String clientId; + private long tokenFetchTime = -1; + + private static final long ONE_HOUR = 3600 * 1000; Review comment: better: 3_600_000
[jira] [Commented] (HADOOP-16895) [thirdparty] Revisit LICENSEs and NOTICEs
[ https://issues.apache.org/jira/browse/HADOOP-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057363#comment-17057363 ] Hudson commented on HADOOP-16895: - SUCCESS: Integrated in Jenkins build Hadoop-thirdparty-trunk-commit #9 (See [https://builds.apache.org/job/Hadoop-thirdparty-trunk-commit/9/]) HADOOP-16895. [thirdparty] Revisit LICENSEs and NOTICEs (#6) addendum to (vinayakumarb: rev 921375665f5b5937e0bd3f1e588a7996777b26d3) * (edit) pom.xml > [thirdparty] Revisit LICENSEs and NOTICEs > - > > Key: HADOOP-16895 > URL: https://issues.apache.org/jira/browse/HADOOP-16895 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > Fix For: thirdparty-1.0.0 > > > LICENSE.txt and NOTICE.txt have many entries which are unrelated to > thirdparty, > Revisit and cleanup such entries.
[jira] [Commented] (HADOOP-16819) Possible inconsistent state of AbstractDelegationTokenSecretManager
[ https://issues.apache.org/jira/browse/HADOOP-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057364#comment-17057364 ] Steve Loughran commented on HADOOP-16819: - # still waiting for that github PR # as this goes near code I don't understand, we will need to chase down other reviewers; [~omalley] springs to mind > Possible inconsistent state of AbstractDelegationTokenSecretManager > --- > > Key: HADOOP-16819 > URL: https://issues.apache.org/jira/browse/HADOOP-16819 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, security >Affects Versions: 3.3.0 >Reporter: Hankó Gergely >Assignee: Hankó Gergely >Priority: Major > Attachments: HADOOP-16819.001.patch > > > [AbstractDelegationTokenSecretManager.updateCurrentKey|https://github.com/apache/hadoop/blob/581072a8f04f7568d3560f105fd1988d3acc9e54/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java#L360] > increments the current key id and creates the new delegation key in two > distinct synchronized blocks. > This means that other threads can see the class in an *inconsistent state, > where the key for the current key id doesn't exist (yet)*. > For example the following method sometimes returns null when the token > remover thread is between the two synchronized blocks: > {noformat} > @Override > public DelegationKey getCurrentKey() { > return getDelegationKey(getCurrentKeyId()); > }{noformat} > > Also it is possible that updateCurrentKey is called from multiple threads at > the same time so *distinct keys can be generated with the same key id*. > > This issue is suspected to be the cause of the intermittent failure of > [TestLlapSignerImpl.testSigning|https://github.com/apache/hive/blob/3c0705eaf5121c7b61f2dbe9db9545c3926f26f1/llap-server/src/test/org/apache/hadoop/hive/llap/security/TestLlapSignerImpl.java#L195] > - HIVE-22621. 
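The inconsistency window described in HADOOP-16819 can be condensed into a small sketch (all names here are invented, not the actual Hadoop classes): incrementing the key id and storing the key in two separate synchronized blocks leaves a gap in which a reader sees the new id but no key, whereas doing both steps under one lock closes the gap.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class KeyRollSketch {
    private final Map<Integer, String> allKeys = new ConcurrentHashMap<>();
    private int currentId = 0;

    // Mirrors the reported structure: two distinct synchronized blocks.
    public void updateCurrentKeyRacy() {
        int newId;
        synchronized (this) {
            newId = ++currentId;
        }
        // A thread calling getCurrentKey() here sees the new id but no key,
        // so getCurrentKey() can return null.
        synchronized (this) {
            allKeys.put(newId, "key-" + newId);
        }
    }

    // One lock around both steps: readers never observe the gap, and two
    // concurrent updaters cannot generate distinct keys with the same id.
    public synchronized void updateCurrentKeyAtomic() {
        int newId = ++currentId;
        allKeys.put(newId, "key-" + newId);
    }

    public synchronized String getCurrentKey() {
        return allKeys.get(currentId);
    }

    public static void main(String[] args) {
        KeyRollSketch m = new KeyRollSketch();
        m.updateCurrentKeyAtomic();
        System.out.println(m.getCurrentKey()); // prints "key-1"
    }
}
```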
[jira] [Resolved] (HADOOP-16919) [thirdparty] Handle release package related issues
[ https://issues.apache.org/jira/browse/HADOOP-16919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B resolved HADOOP-16919. Fix Version/s: thirdparty-1.0.0 Hadoop Flags: Reviewed Resolution: Fixed Merged to trunk,branch-1.0 of hadoop-thirdparty. Thanks [~ayushtkn] for reviews > [thirdparty] Handle release package related issues > -- > > Key: HADOOP-16919 > URL: https://issues.apache.org/jira/browse/HADOOP-16919 > Project: Hadoop Common > Issue Type: Bug > Components: hadoop-thirdparty >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > Fix For: thirdparty-1.0.0 > > > Handle following comments from [~elek] in 1.0.0-RC0 voting mail thread > here[[https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]] > {quote}3. Yetus seems to be included in the source package. I am not sure if > it's intentional but I would remove the patchprocess directory from the > tar file. > 7. Minor nit: I would suggest to use only the filename in the sha512 > files (instead of having the /build/source/target prefix). It would help > to use `sha512 -c` command to validate the checksum. > {quote} > Also, update available artifacts in docs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16895) [thirdparty] Revisit LICENSEs and NOTICEs
[ https://issues.apache.org/jira/browse/HADOOP-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B resolved HADOOP-16895. Fix Version/s: thirdparty-1.0.0 Hadoop Flags: Reviewed Resolution: Fixed Committed to branch-1.0, trunk of hadoop-thirdparty. Thanks [~aajisaka] and [~elek] for reviews. > [thirdparty] Revisit LICENSEs and NOTICEs > - > > Key: HADOOP-16895 > URL: https://issues.apache.org/jira/browse/HADOOP-16895 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > Fix For: thirdparty-1.0.0 > > > LICENSE.txt and NOTICE.txt have many entries which are unrelated to > thirdparty, > Revisit and cleanup such entries. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdi
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#discussion_r391187769 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuthorizationContext.java ## @@ -0,0 +1,167 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs.server.namenode; + +import org.apache.hadoop.ipc.CallerContext; +import org.apache.hadoop.security.UserGroupInformation; +import org.junit.Before; +import org.junit.Test; + +import java.io.IOException; + +import static junit.framework.TestCase.assertEquals; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +public class TestAuthorizationContext { + + private String fsOwner = "hdfs"; + private String superGroup = "hdfs"; + private UserGroupInformation ugi = UserGroupInformation. 
+ createUserForTesting(fsOwner, new String[] {superGroup}); + + private INodeAttributes[] emptyINodeAttributes = new INodeAttributes[] {}; + private INodesInPath iip = mock(INodesInPath.class); + private int snapshotId = 0; + private INode[] inodes = new INode[] {}; + private byte[][] components = new byte[][] {}; + private String path = ""; + private int ancestorIndex = inodes.length - 2; + + @Before + public void setUp() throws IOException { +when(iip.getPathSnapshotId()).thenReturn(snapshotId); +when(iip.getINodesArray()).thenReturn(inodes); +when(iip.getPathComponents()).thenReturn(components); +when(iip.getPath()).thenReturn(path); + } + + @Test + public void testBuilder() { +String opType = "test"; +CallerContext.setCurrent(new CallerContext.Builder( +"TestAuthorizationContext").build()); + +INodeAttributeProvider.AuthorizationContext.Builder builder = +new INodeAttributeProvider.AuthorizationContext.Builder(); +builder.fsOwner(fsOwner). +supergroup(superGroup). +callerUgi(ugi). +inodeAttrs(emptyINodeAttributes). +inodes(inodes). +pathByNameArr(components). +snapshotId(snapshotId). +path(path). +ancestorIndex(ancestorIndex). +doCheckOwner(true). +ancestorAccess(null). +parentAccess(null). +access(null). +subAccess(null). +ignoreEmptyDir(true). +operationName(opType). 
+callerContext(CallerContext.getCurrent()); + +INodeAttributeProvider.AuthorizationContext authzContext = builder.build(); +assertEquals(authzContext.getFsOwner(), fsOwner); +assertEquals(authzContext.getSupergroup(), superGroup); +assertEquals(authzContext.getCallerUgi(), ugi); +assertEquals(authzContext.getInodeAttrs(), emptyINodeAttributes); +assertEquals(authzContext.getInodes(), inodes); +assertEquals(authzContext.getPathByNameArr(), components); +assertEquals(authzContext.getSnapshotId(), snapshotId); +assertEquals(authzContext.getPath(), path); +assertEquals(authzContext.getAncestorIndex(), ancestorIndex); +assertEquals(authzContext.getOperationName(), opType); +assertEquals(authzContext.getCallerContext(), CallerContext.getCurrent()); + } + + @Test + public void testLegacyAPI() throws IOException { +INodeAttributeProvider.AccessControlEnforcer +mockEnforcer = mock(INodeAttributeProvider.AccessControlEnforcer.class); +INodeAttributeProvider mockINodeAttributeProvider = +mock(INodeAttributeProvider.class); +when(mockINodeAttributeProvider.getExternalAccessControlEnforcer(any())). +thenReturn(mockEnforcer); + +FSPermissionChecker checker = new FSPermissionChecker( +fsOwner, superGroup, ugi, mockINodeAttributeProvider); Review comment: NIT: do we have a test case when the attributeProvider=null? This is an automated message from the Apache Git Service. To respond to the message, please log on
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdi
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#discussion_r391179551 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java ## @@ -68,6 +391,16 @@ public abstract void checkPermission(String fsOwner, String supergroup, boolean ignoreEmptyDir) throws AccessControlException; +/** + * Checks permission on a file system object. Has to throw an Exception + * if the filesystem object is not accessessible by the calling Ugi. Review comment: NIT: typo: accessessible This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdi
xiaoyuyao commented on a change in pull request #1829: HDFS-14743. Enhance INodeAttributeProvider/ AccessControlEnforcer Interface in HDFS to support Authorization of mkdir, rm, rmdir, copy, move etc... URL: https://github.com/apache/hadoop/pull/1829#discussion_r391173121 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1982,6 +1982,7 @@ void setPermission(String src, FsPermission permission) throws IOException { FileStatus auditStat; checkOperation(OperationCategory.WRITE); final FSPermissionChecker pc = getPermissionChecker(); +FSPermissionChecker.setOperationType(operationName); Review comment: There are other places that need to be patched with setOperationType After HDFS-7416 refactor, not all permission check is done in FSN. Here is the list of missed ones: FSDirSymlinkOp#createSymlinkInt() NameNodeAdapter#getFileInfo() NamenodeFsck#getBlockLocations() FSNDNCache#addCacheDirective/removeCacheDirective/modifyCacheDirective/listCacheDirectives/listCachePools This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16776) backport HADOOP-16775: distcp copies to s3 are randomly corrupted
[ https://issues.apache.org/jira/browse/HADOOP-16776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-16776: - Resolution: Won't Fix Status: Resolved (was: Patch Available) Branch-2.8 is EOL. Resolve as Won't Fix. > backport HADOOP-16775: distcp copies to s3 are randomly corrupted > - > > Key: HADOOP-16776 > URL: https://issues.apache.org/jira/browse/HADOOP-16776 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.8.0, 3.0.0, 2.10.0 >Reporter: Amir Shenavandeh >Priority: Blocker > Labels: DistCp > Attachments: HADOOP-16776-branch-2.8-001.patch, > HADOOP-16776-branch-2.8-002.patch > > > This is to back port HADOOP-16775 to hadoop 2.8 branch. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14866) Backport implementation of parallel block copy in Distcp to hadoop 2.8
[ https://issues.apache.org/jira/browse/HADOOP-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-14866: - Resolution: Won't Fix Status: Resolved (was: Patch Available) Branch-2.8 is EOL. Resolve as Won't Fix. > Backport implementation of parallel block copy in Distcp to hadoop 2.8 > -- > > Key: HADOOP-14866 > URL: https://issues.apache.org/jira/browse/HADOOP-14866 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Reporter: Huafeng Wang >Assignee: Huafeng Wang >Priority: Major > Attachments: HADOOP-14866.001.branch-2.8.patch > > > The implementation of parallel block copy in Distcp targets to version 2.9. > It would be great to have this feature in version 2.8. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16814) Add dropped connections metric for Server
[ https://issues.apache.org/jira/browse/HADOOP-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057257#comment-17057257 ] Wei-Chiu Chuang commented on HADOOP-16814: -- Ping. Would you like to fix the UI and offer patch? Thanks! > Add dropped connections metric for Server > - > > Key: HADOOP-16814 > URL: https://issues.apache.org/jira/browse/HADOOP-16814 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.0 >Reporter: Fei Hui >Assignee: Fei Hui >Priority: Minor > Attachments: HADOOP-16814.001.patch > > > With this metric we can see that the number of handled rpcs which weren't > sent to clients. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9700) Snapshot support for distcp
[ https://issues.apache.org/jira/browse/HADOOP-9700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-9700: Resolution: Duplicate Status: Resolved (was: Patch Available) > Snapshot support for distcp > --- > > Key: HADOOP-9700 > URL: https://issues.apache.org/jira/browse/HADOOP-9700 > Project: Hadoop Common > Issue Type: New Feature > Components: tools/distcp >Reporter: Binglin Chang >Assignee: Binglin Chang >Priority: Major > Labels: BB2015-05-TBR > Attachments: HADOOP-9700-demo.patch > > > Add snapshot incremental copy ability to distcp, so we can do iterative > consistent backup between hadoop clusters. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057240#comment-17057240 ] Hadoop QA commented on HADOOP-16822: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 35m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 19s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s{color} | {color:green} hadoop-client-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.7 Server=19.03.7 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | HADOOP-16822 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12991435/HADOOP-16822-hadoop-client-api-source-jar.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux eaaa171ff11f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cf9cf83 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16790/testReport/ | | Max. process+thread count | 312 (vs. ulimit of 5500) | | modules | C: hadoop-client-modules/hadoop-client-api U: hadoop-client-modules/hadoop-client-api | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16790/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Provide source artifacts for hadoop-client-api > -- > > Key: HADOOP-16822 > URL: https://issues.apache.org/jira/browse/HADOOP-16822 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Karel Kolman >Assignee: Karel Kolman >Priority: Major > Attachment
[jira] [Updated] (HADOOP-11716) Bump netty version to 4.1
[ https://issues.apache.org/jira/browse/HADOOP-11716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-11716: - Resolution: Duplicate Status: Resolved (was: Patch Available) > Bump netty version to 4.1 > - > > Key: HADOOP-11716 > URL: https://issues.apache.org/jira/browse/HADOOP-11716 > Project: Hadoop Common > Issue Type: Bug >Reporter: Haohui Mai >Assignee: Haohui Mai >Priority: Major > Labels: BB2015-05-TBR > Attachments: HADOOP-11716.000.patch, HADOOP-11716.001.patch, > HADOOP-11716.002.patch, HADOOP-11716.003.patch > > > This jira proposes to bump the netty version from 4.0 to 4.1 so that it is > possible to leverage the HTTP/2 support from netty. > Note that this is a compatible change: the dependency of netty 4.0 is > introduced during the 2.7 timeframe and no release has been made during the > time. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057234#comment-17057234 ] Sean Busbey commented on HADOOP-16822: -- these should only end up in the nexus repo right? If that's the case I think adding source jars would be nice if it works. > Provide source artifacts for hadoop-client-api > -- > > Key: HADOOP-16822 > URL: https://issues.apache.org/jira/browse/HADOOP-16822 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Karel Kolman >Assignee: Karel Kolman >Priority: Major > Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch > > > h5. Improvement request > The third-party libraries shading hadoop-client-api (& hadoop-client-runtime) > artifacts are super useful. > > Having uber source jar for hadoop-client-api (maybe even > hadoop-client-runtime) would be great for downstream development & debugging > purposes. > Are there any obstacles or objections against providing fat jar with all the > hadoop client api as well ? > h5. Dev links > - *maven-shaded-plugin* and its *shadeSourcesContent* attribute > - > https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a change in pull request #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong
goiri commented on a change in pull request #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong URL: https://github.com/apache/hadoop/pull/1889#discussion_r391119110 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java ## @@ -29,17 +29,19 @@ @InterfaceAudience.Private @InterfaceStability.Unstable public class FakeTimer extends Timer { + private long now; private long nowNanos; /** Constructs a FakeTimer with a non-zero value */ public FakeTimer() { // Initialize with a non-trivial value. +now = 1577836800000L; // 2020-01-01 00:00:00,000+ Review comment: Can we leave the old behavior as it was and add a method to change the nowNanos so we only set it there? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
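The idea behind seeding the test timer, sketched independently of the real FakeTimer/Timer classes (the names and API below are invented for illustration): a fixed, recognizable epoch makes timestamps produced in tests deterministic and easy to eyeball.

```java
public class SimpleFakeTimer {
    // 2020-01-01T00:00:00Z in epoch milliseconds; fixed so test output is stable.
    static final long EPOCH_2020_MS = 1_577_836_800_000L;

    private long nowMs = EPOCH_2020_MS;

    /** Wall-clock "now" in milliseconds since the Unix epoch. */
    public long now() {
        return nowMs;
    }

    /** Manually advance the fake clock instead of reading the system time. */
    public void advanceMillis(long millis) {
        nowMs += millis;
    }

    public static void main(String[] args) {
        SimpleFakeTimer timer = new SimpleFakeTimer();
        timer.advanceMillis(250);
        System.out.println(timer.now() - SimpleFakeTimer.EPOCH_2020_MS); // prints "250"
    }
}
```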
[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057178#comment-17057178 ] Wei-Chiu Chuang commented on HADOOP-16822: -- Thanks [~karel.kolman] I think the only concern I have is the size of the generated artifacts. At some point we had an issue where the generated tarball was ~400mb or more. Not sure how much this is going to be or if this is going to be included in the tarball. [~busbey] any thoughts with regard to shaded artifacts? > Provide source artifacts for hadoop-client-api > -- > > Key: HADOOP-16822 > URL: https://issues.apache.org/jira/browse/HADOOP-16822 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Karel Kolman >Assignee: Karel Kolman >Priority: Major > Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch > > > h5. Improvement request > The third-party libraries shading hadoop-client-api (& hadoop-client-runtime) > artifacts are super useful. > > Having uber source jar for hadoop-client-api (maybe even > hadoop-client-runtime) would be great for downstream development & debugging > purposes. > Are there any obstacles or objections against providing fat jar with all the > hadoop client api as well ? > h5. Dev links > - *maven-shaded-plugin* and its *shadeSourcesContent* attribute > - > https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16822) Provide source artifacts for hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-16822: Assignee: Karel Kolman > Provide source artifacts for hadoop-client-api > -- > > Key: HADOOP-16822 > URL: https://issues.apache.org/jira/browse/HADOOP-16822 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Karel Kolman >Assignee: Karel Kolman >Priority: Major > Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch > > > h5. Improvement request > The third-party libraries shading hadoop-client-api (& hadoop-client-runtime) > artifacts are super useful. > > Having uber source jar for hadoop-client-api (maybe even > hadoop-client-runtime) would be great for downstream development & debugging > purposes. > Are there any obstacles or objections against providing fat jar with all the > hadoop client api as well ? > h5. Dev links > - *maven-shaded-plugin* and its *shadeSourcesContent* attribute > - > https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16920) ABFS: Make list page size configurable
[ https://issues.apache.org/jira/browse/HADOOP-16920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057133#comment-17057133 ] Bilahari T H commented on HADOOP-16920: --- Driver test results using accounts in Central India Account with HNS Support {noformat} [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 416, Failures: 0, Errors: 0, Skipped: 36 [WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24{noformat} Account without HNS support {noformat} [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 416, Failures: 0, Errors: 0, Skipped: 226 [WARNING] Tests run: 194, Failures: 0, Errors: 0, Skipped: 24{noformat} > ABFS: Make list page size configurable > -- > > Key: HADOOP-16920 > URL: https://issues.apache.org/jira/browse/HADOOP-16920 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bilahari T H >Priority: Minor > > Make list page size configurable -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16685) FileSystem#listStatusIterator does not check if given path exists
[ https://issues.apache.org/jira/browse/HADOOP-16685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056996#comment-17056996 ] Brahma Reddy Battula commented on HADOOP-16685: --- This will not cause any incompatible issue...? > FileSystem#listStatusIterator does not check if given path exists > - > > Key: HADOOP-16685 > URL: https://issues.apache.org/jira/browse/HADOOP-16685 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.3.0, 3.2.2 > > > The Javadocs of FileSystem#listStatusIterator(final Path p) state that it > "@throws FileNotFoundException if p does not exist". However, > that does not seem to be the case. The method simply creates a > DirListingIterator which doesn't do an existence check. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
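The fix direction described in HADOOP-16685 is to resolve the path eagerly, so the factory method itself throws and the javadoc contract holds. A toy sketch against an in-memory "filesystem" (all names invented; this is not the HDFS implementation):

```java
import java.io.FileNotFoundException;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class ListingSketch {
    private final Map<String, List<String>> directories;

    public ListingSketch(Map<String, List<String>> directories) {
        this.directories = directories;
    }

    // Check existence up front; a purely lazy iterator would defer (or skip
    // entirely) the FileNotFoundException promised by the javadoc.
    public Iterator<String> listStatusIterator(String path)
            throws FileNotFoundException {
        List<String> entries = directories.get(path);
        if (entries == null) {
            throw new FileNotFoundException("Path does not exist: " + path);
        }
        return entries.iterator();
    }

    public static void main(String[] args) throws FileNotFoundException {
        ListingSketch fs = new ListingSketch(Map.of("/data", List.of("a", "b")));
        System.out.println(fs.listStatusIterator("/data").next()); // prints "a"
    }
}
```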
[GitHub] [hadoop] hadoop-yetus commented on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong
hadoop-yetus commented on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong URL: https://github.com/apache/hadoop/pull/1889#issuecomment-597622324 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 4s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 55s | trunk passed | | +1 :green_heart: | compile | 17m 56s | trunk passed | | +1 :green_heart: | checkstyle | 2m 45s | trunk passed | | +1 :green_heart: | mvnsite | 2m 44s | trunk passed | | +1 :green_heart: | shadedclient | 22m 21s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 43s | trunk passed | | +0 :ok: | spotbugs | 3m 7s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 5m 11s | trunk passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 59s | the patch passed | | +1 :green_heart: | compile | 17m 7s | the patch passed | | +1 :green_heart: | javac | 17m 7s | the patch passed | | +1 :green_heart: | checkstyle | 2m 46s | the patch passed | | +1 :green_heart: | mvnsite | 2m 46s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 16m 7s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 1m 48s | the patch passed | | +1 :green_heart: | findbugs | 6m 1s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 50s | hadoop-common in the patch passed. | | -1 :x: | unit | 112m 13s | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 52s | The patch does not generate ASF License warnings. | | | | 250m 22s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1889 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 085133adf7b0 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / cf9cf83 | | Default Java | 1.8.0_242 | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/3/testReport/ | | Max. process+thread count | 3238 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-594902805 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 27s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 52s | trunk passed | | +1 :green_heart: | compile | 23m 0s | trunk passed | | +1 :green_heart: | checkstyle | 3m 22s | trunk passed | | +1 :green_heart: | mvnsite | 1m 15s | trunk passed | | +1 :green_heart: | shadedclient | 23m 18s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 59s | trunk passed | | +0 :ok: | spotbugs | 1m 14s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 27s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | | -0 :warning: | patch | 1m 36s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 44s | the patch passed | | +1 :green_heart: | compile | 20m 31s | the patch passed | | +1 :green_heart: | javac | 20m 31s | the patch passed | | -0 :warning: | checkstyle | 3m 26s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 1m 11s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 16m 53s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 54s | the patch passed | | +0 :ok: | findbugs | 0m 29s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 23s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 22s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | The patch does not generate ASF License warnings. | | | | 131m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1872 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 64625decbc8e 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 3afd4cb | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/5/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/5/testReport/ | | Max. 
process+thread count | 309 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/5/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-594120524 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 35m 13s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 25m 54s | trunk passed | | +1 :green_heart: | compile | 0m 37s | trunk passed | | +1 :green_heart: | checkstyle | 0m 26s | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | trunk passed | | +1 :green_heart: | shadedclient | 20m 41s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 32s | trunk passed | | +0 :ok: | spotbugs | 1m 9s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 1m 7s | trunk passed | | -0 :warning: | patch | 1m 29s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 40s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hadoop-tools/hadoop-azure: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 32s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 18m 50s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 27s | the patch passed | | +1 :green_heart: | findbugs | 1m 7s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 37s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | The patch does not generate ASF License warnings. | | | | 111m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1872 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 296de404716a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / d0a7c79 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/3/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-594653639 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 14s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 10s | trunk passed | | +1 :green_heart: | compile | 17m 18s | trunk passed | | +1 :green_heart: | checkstyle | 2m 38s | trunk passed | | +1 :green_heart: | mvnsite | 1m 20s | trunk passed | | +1 :green_heart: | shadedclient | 19m 13s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 12s | trunk passed | | +0 :ok: | spotbugs | 1m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 34s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | | -0 :warning: | patch | 1m 29s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 42s | the patch passed | | +1 :green_heart: | compile | 16m 9s | the patch passed | | +1 :green_heart: | javac | 16m 9s | the patch passed | | -0 :warning: | checkstyle | 2m 42s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 1m 22s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 14m 13s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 14s | the patch passed | | +0 :ok: | findbugs | 0m 31s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 32s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 37s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. | | | | 109m 16s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1872 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 5a028e078956 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / bbd704b | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/4/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/4/testReport/ | | Max. process+thread count | 456 (vs. 
ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/4/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-593699941 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 1m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 18m 46s | trunk passed | | +1 :green_heart: | compile | 17m 0s | trunk passed | | +1 :green_heart: | checkstyle | 2m 37s | trunk passed | | +1 :green_heart: | mvnsite | 1m 21s | trunk passed | | +1 :green_heart: | shadedclient | 19m 40s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 13s | trunk passed | | +0 :ok: | spotbugs | 1m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 33s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 41s | the patch passed | | +1 :green_heart: | compile | 16m 14s | the patch passed | | +1 :green_heart: | javac | 16m 14s | the patch passed | | -0 :warning: | checkstyle | 2m 36s | root: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 1m 19s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 14m 15s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 16s | the patch passed | | +0 :ok: | findbugs | 0m 30s | hadoop-project has no data from findbugs | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 25s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 1m 33s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | The patch does not generate ASF License warnings. | | | | 106m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1872 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 9795022f8637 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/artifact/out/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/testReport/ | | Max. process+thread count | 452 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-azure U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider
hadoop-yetus removed a comment on issue #1872: Hadoop 16890: Change in expiry calculation for MSI token provider URL: https://github.com/apache/hadoop/pull/1872#issuecomment-593691330 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 12s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 18s | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | trunk passed | | +1 :green_heart: | shadedclient | 15m 57s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 21s | trunk passed | | +0 :ok: | spotbugs | 0m 49s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 48s | trunk passed | | -0 :warning: | patch | 1m 5s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | the patch passed | | +1 :green_heart: | compile | 0m 21s | the patch passed | | +1 :green_heart: | javac | 0m 21s | the patch passed | | -0 :warning: | checkstyle | 0m 14s | hadoop-tools/hadoop-azure: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | mvnsite | 0m 24s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 15m 17s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 19s | the patch passed | | +1 :green_heart: | findbugs | 0m 52s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. | | | | 61m 14s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.6 Server=19.03.6 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1872 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 6e61497f53c4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / edc2e9d | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/testReport/ | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1872/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[jira] [Work started] (HADOOP-16914) Adding Output Stream Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-16914 started by Mehakmeet Singh. > Adding Output Stream Counters in ABFS > - > > Key: HADOOP-16914 > URL: https://issues.apache.org/jira/browse/HADOOP-16914 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > > AbfsOutputStream does not have any counters that can be populated or referred > to when needed for finding bottlenecks in that area. > purpose: > * Create an interface and Implementation class for all the AbfsOutputStream > counters. > * populate the counters in AbfsOutputStream in appropriate places. > * Override the toString() to see counters in logs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
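The three bullets in the HADOOP-16914 description (an interface plus implementation class for the counters, populated from AbfsOutputStream, with toString() overridden for logging) can be sketched roughly as below. All names here (OutputStreamStatistics, bytesToUpload, etc.) are hypothetical illustrations, not the API the patch actually adds.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical counter interface for the output stream.
interface OutputStreamStatistics {
    void bytesToUpload(long n);    // bytes queued for upload
    void uploadSuccessful(long n); // bytes confirmed uploaded
    void writeOperation();         // one write() call on the stream
}

// Implementation class; AtomicLong because an output stream's uploads
// may complete on background threads.
class OutputStreamStatisticsImpl implements OutputStreamStatistics {
    private final AtomicLong queuedBytes = new AtomicLong();
    private final AtomicLong uploadedBytes = new AtomicLong();
    private final AtomicLong writeOps = new AtomicLong();

    public void bytesToUpload(long n) { queuedBytes.addAndGet(n); }
    public void uploadSuccessful(long n) { uploadedBytes.addAndGet(n); }
    public void writeOperation() { writeOps.incrementAndGet(); }

    // Overriding toString() is what makes the counters show up when the
    // stream is logged, per the third bullet of the issue.
    @Override
    public String toString() {
        return "OutputStreamStatistics{queuedBytes=" + queuedBytes
            + ", uploadedBytes=" + uploadedBytes
            + ", writeOps=" + writeOps + '}';
    }

    public static void main(String[] args) {
        OutputStreamStatisticsImpl stats = new OutputStreamStatisticsImpl();
        stats.bytesToUpload(4096);
        stats.uploadSuccessful(4096);
        stats.writeOperation();
        System.out.println(stats); // counters visible in logs via toString()
    }
}
```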
[jira] [Work started] (HADOOP-16910) Adding FileSystem Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-16910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-16910 started by Mehakmeet Singh. > Adding FileSystem Counters in ABFS > -- > > Key: HADOOP-16910 > URL: https://issues.apache.org/jira/browse/HADOOP-16910 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > > Abfs FileSystem Counters are not populated and hence not shown on the console > side. > purpose: > * Passing Statistics in AbfsOutputStream and populating FileSystem > Counter(Write_ops) there. > * Populating Read_ops in AbfsInputStream. > * Showing Bytes_written, Bytes_read, Write_ops and Read_ops on the console > while using ABFS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
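The pattern HADOOP-16910 describes — passing a shared statistics object into the stream so it can bump Bytes_written and Write_ops itself — looks roughly like the following. This is an illustrative sketch, not the ABFS code: the real streams update Hadoop's FileSystem.Statistics, while the Stats class here is a stand-in.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Stand-in for FileSystem.Statistics: the counters the console reports.
class Stats {
    long bytesWritten;
    long writeOps;
}

// A stream that is handed the shared statistics object and populates the
// write-side counters itself, as the issue proposes for AbfsOutputStream.
class CountingOutputStream extends OutputStream {
    private final OutputStream out;
    private final Stats stats;

    CountingOutputStream(OutputStream out, Stats stats) {
        this.out = out;
        this.stats = stats;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        stats.bytesWritten += 1;
        stats.writeOps += 1;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        stats.bytesWritten += len;
        stats.writeOps += 1; // one write op per call, regardless of size
    }

    public static void main(String[] args) throws IOException {
        Stats stats = new Stats();
        CountingOutputStream cos =
            new CountingOutputStream(new ByteArrayOutputStream(), stats);
        cos.write(new byte[]{1, 2, 3, 4, 5}, 0, 5);
        System.out.println(stats.bytesWritten + " bytes, "
            + stats.writeOps + " write ops");
    }
}
```

An AbfsInputStream would mirror this on the read side, incrementing Read_ops and Bytes_read in its read() overrides.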
[jira] [Assigned] (HADOOP-16910) Adding FileSystem Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-16910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mehakmeet Singh reassigned HADOOP-16910: Assignee: Mehakmeet Singh > Adding FileSystem Counters in ABFS > -- > > Key: HADOOP-16910 > URL: https://issues.apache.org/jira/browse/HADOOP-16910 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > > Abfs FileSystem Counters are not populated and hence not shown on the console > side. > purpose: > * Passing Statistics in AbfsOutputStream and populating FileSystem > Counter(Write_ops) there. > * Populating Read_ops in AbfsInputStream. > * Showing Bytes_written, Bytes_read, Write_ops and Read_ops on the console > while using ABFS. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16914) Adding Output Stream Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mehakmeet Singh reassigned HADOOP-16914: Assignee: Mehakmeet Singh > Adding Output Stream Counters in ABFS > - > > Key: HADOOP-16914 > URL: https://issues.apache.org/jira/browse/HADOOP-16914 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > > AbfsOutputStream does not have any counters that can be populated or referred > to when needed for finding bottlenecks in that area. > purpose: > * Create an interface and Implementation class for all the AbfsOutputStream > counters. > * populate the counters in AbfsOutputStream in appropriate places. > * Override the toString() to see counters in logs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1893: HADOOP-16920 ABFS: Make list page size configurable
hadoop-yetus commented on issue #1893: HADOOP-16920 ABFS: Make list page size configurable URL: https://github.com/apache/hadoop/pull/1893#issuecomment-597605373 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 17s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 41s | trunk passed | | +1 :green_heart: | compile | 0m 28s | trunk passed | | +1 :green_heart: | checkstyle | 0m 23s | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | trunk passed | | +1 :green_heart: | shadedclient | 16m 21s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 0m 56s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 53s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 22s | the patch passed | | +1 :green_heart: | javac | 0m 22s | the patch passed | | -0 :warning: | checkstyle | 0m 16s | hadoop-tools/hadoop-azure: The patch generated 3 new + 2 unchanged - 0 fixed = 5 total (was 2) | | +1 :green_heart: | mvnsite | 0m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 21s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 21s | the patch passed | | +1 :green_heart: | findbugs | 0m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 18s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 64m 24s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1893 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle | | uname | Linux 81c666ac22a5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / cf9cf83 | | Default Java | 1.8.0_242 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/1/testReport/ | | Max. process+thread count | 301 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1893/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16919) [thirdparty] Handle release package related issues
[ https://issues.apache.org/jira/browse/HADOOP-16919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056908#comment-17056908 ] Hudson commented on HADOOP-16919: - SUCCESS: Integrated in Jenkins build Hadoop-thirdparty-trunk-commit #8 (See [https://builds.apache.org/job/Hadoop-thirdparty-trunk-commit/8/]) HADOOP-16919. Handle release packaging issues (#7) (vinayakumarb: rev 19948e6a98c562ce79be6e2783d51a8d7be110a5) * (edit) dev-support/bin/create-release * (edit) src/site/markdown/index.md.vm * (edit) src/main/resources/assemblies/hadoop-thirdparty-src.xml > [thirdparty] Handle release package related issues > -- > > Key: HADOOP-16919 > URL: https://issues.apache.org/jira/browse/HADOOP-16919 > Project: Hadoop Common > Issue Type: Bug > Components: hadoop-thirdparty >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > > Handle following comments from [~elek] in 1.0.0-RC0 voting mail thread > here[[https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]] > {quote}3. Yetus seems to be included in the source package. I am not sure if > it's intentional but I would remove the patchprocess directory from the > tar file. > 7. Minor nit: I would suggest to use only the filename in the sha512 > files (instead of having the /build/source/target prefix). It would help > to use `sha512 -c` command to validate the checksum. > {quote} > Also, update available artifacts in docs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
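The checksum nit quoted above (point 7) is easy to demonstrate. Assuming the `sha512 -c` in the thread refers to GNU coreutils `sha512sum -c`: a checksum file that records only the file name validates wherever the artifact sits, while one carrying the `/build/source/target` prefix only validates if that exact path exists. The file names below are made up for the demonstration.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
echo "release artifact" > artifact.tar.gz

# Filename-only entry: portable, validates in any directory holding the file.
sha512sum artifact.tar.gz > artifact.tar.gz.sha512
sha512sum -c artifact.tar.gz.sha512

# Path-prefixed entry, as in the RC0 packaging: fails unless the original
# build path exists on the verifier's machine.
sha512sum "$tmp/artifact.tar.gz" \
  | sed "s|$tmp|/build/source/target|" > bad.sha512
sha512sum -c bad.sha512 2>/dev/null && echo "validated" || echo "cannot validate"
```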
[jira] [Created] (HADOOP-16921) NPE in s3a byte buffer block upload
Steve Loughran created HADOOP-16921: --- Summary: NPE in s3a byte buffer block upload Key: HADOOP-16921 URL: https://issues.apache.org/jira/browse/HADOOP-16921 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.3.0 Reporter: Steve Loughran NPE in s3a upload when fs.s3a.fast.upload.buffer = bytebuffer
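The failure condition above is selected by the S3A fast-upload buffering mode. A minimal configuration fragment in the standard Hadoop core-site.xml form (only the key and value come from this report; where this file lives in a given deployment varies):

```xml
<!-- Stage S3A fast-upload blocks in off-heap ByteBuffers;
     the NPE reported in HADOOP-16921 occurs with this mode selected. -->
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <value>bytebuffer</value>
</property>
```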
[jira] [Updated] (HADOOP-16829) Über-jira: S3A Hadoop 3.4 features
[ https://issues.apache.org/jira/browse/HADOOP-16829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16829: Description: Über-jira: S3A features/fixes for Hadoop 3.4 As usual, this will clutter up with everything which hasn't gone in: don't interpret presence on this list as a commitment to implement. And for anyone wanting to add patches MUST # reviews via github PRs # *no declaration of AWS S3 endpoint (or other S3 impl) -no review* SHOULD # have a setup for testing SSE-KMS, DDB/S3Guard # including an assumed role we can use for AssumedRole Delegation Tokens If you are going near those bits of code, they uprate from SHOULD to MUST. was: Über-jira: S3A features/fixes for Hadoop 3.4 As usual, this will clutter up with everything which hasn't gone in: don't interpret presence on this list as a commitment to implement. And for anyone wanting to add patches MUST # reviews via github PRs # *no declaration of AWS S3 endpoint (or other S3 impl) -no review* SHOULD # have a setup for testing SSE-KMS, DDB/S3Guard # including an assumed role we can use for AssumedRole Delegation Tokens If you are going near those bits of code, they uprade from SHOULD to MUST. > Über-jira: S3A Hadoop 3.4 features > -- > > Key: HADOOP-16829 > URL: https://issues.apache.org/jira/browse/HADOOP-16829 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > Über-jira: S3A features/fixes for Hadoop 3.4 > As usual, this will clutter up with everything which hasn't gone in: don't > interpret presence on this list as a commitment to implement. 
> And for anyone wanting to add patches > MUST > # reviews via github PRs > # *no declaration of AWS S3 endpoint (or other S3 impl) -no review* > SHOULD > # have a setup for testing SSE-KMS, DDB/S3Guard > # including an assumed role we can use for AssumedRole Delegation Tokens > If you are going near those bits of code, they uprate from SHOULD to MUST.
[jira] [Work started] (HADOOP-16493) S3AFilesystem.initiateRename() can skip check on dest.parent status if src has same parent
[ https://issues.apache.org/jira/browse/HADOOP-16493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-16493 started by Steve Loughran. --- > S3AFilesystem.initiateRename() can skip check on dest.parent status if src > has same parent > -- > > Key: HADOOP-16493 > URL: https://issues.apache.org/jira/browse/HADOOP-16493 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > Speedup inferred from debug logs (probably not a regression from > HADOOP-15183, more something we'd not noticed). > There's a check in {{initiateRename()}} to make sure the parent dir of the > dest exists. > If dest.getParent() is src.getParent() (i.e. a same-dir rename) or is any > other ancestor, we don't need those HEAD/LIST requests.
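The optimisation described here reduces to a pure path check: if the destination's parent is the source's parent or any other ancestor of the source, that directory must already exist and the HEAD/LIST probe can be skipped. The sketch below is a stand-alone illustration, not the actual S3AFileSystem code; the helper name `canSkipDestParentProbe` is invented for it:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class RenameParentCheck {
    /**
     * Returns true when the destination's parent is the source's parent
     * (a same-dir rename) or any other ancestor of the source, so it is
     * guaranteed to exist and the existence probe can be skipped.
     */
    static boolean canSkipDestParentProbe(Path src, Path dest) {
        Path destParent = dest.getParent();
        if (destParent == null) {
            return true;  // dest is the root: always exists
        }
        return destParent.equals(src.getParent())   // same-dir rename
            || src.startsWith(destParent);          // dest parent is an ancestor of src
    }

    public static void main(String[] args) {
        // same-dir rename: probe skippable
        System.out.println(canSkipDestParentProbe(
            Paths.get("/data/in/a"), Paths.get("/data/in/b")));   // true
        // unrelated destination tree: probe still needed
        System.out.println(canSkipDestParentProbe(
            Paths.get("/data/in/a"), Paths.get("/other/out/b"))); // false
    }
}
```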
[jira] [Updated] (HADOOP-16920) ABFS: Make list page size configurable
[ https://issues.apache.org/jira/browse/HADOOP-16920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H updated HADOOP-16920: -- Status: Patch Available (was: Open) > ABFS: Make list page size configurable > -- > > Key: HADOOP-16920 > URL: https://issues.apache.org/jira/browse/HADOOP-16920 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Bilahari T H >Priority: Minor > > Make list page size configurable
[GitHub] [hadoop] bilaharith opened a new pull request #1893: HADOOP-16920 ABFS: Make list page size configurable
bilaharith opened a new pull request #1893: HADOOP-16920 ABFS: Make list page size configurable URL: https://github.com/apache/hadoop/pull/1893 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Created] (HADOOP-16920) ABFS: Make list page size configurable
Bilahari T H created HADOOP-16920: - Summary: ABFS: Make list page size configurable Key: HADOOP-16920 URL: https://issues.apache.org/jira/browse/HADOOP-16920 Project: Hadoop Common Issue Type: Sub-task Reporter: Bilahari T H Make list page size configurable
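Making a page size configurable usually comes down to reading a bounded integer from configuration and rejecting out-of-range values. A minimal sketch, with plain `java.util.Properties` standing in for Hadoop's `Configuration`; the key name `fs.azure.list.max.results`, the default of 500, and the 5000 cap are assumptions for illustration, not settled by this issue:

```java
import java.util.Properties;

public class ListPageSize {
    // Assumed values for the sketch; the real defaults belong to the patch.
    static final String KEY = "fs.azure.list.max.results";
    static final int DEFAULT_PAGE_SIZE = 500;
    static final int SERVICE_MAX = 5000;

    /** Resolve the list page size from config, clamped to a sane range. */
    static int resolvePageSize(Properties conf) {
        int v = Integer.parseInt(
            conf.getProperty(KEY, String.valueOf(DEFAULT_PAGE_SIZE)));
        if (v < 1 || v > SERVICE_MAX) {
            throw new IllegalArgumentException(KEY + " out of range: " + v);
        }
        return v;
    }
}
```

Validating at read time keeps a bad setting from surfacing later as an opaque service-side error on the list call.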
[GitHub] [hadoop] hadoop-yetus commented on issue #1892: HADOOP-16769 LocalDirAllocator to provide diagnostics when file creat…
hadoop-yetus commented on issue #1892: HADOOP-16769 LocalDirAllocator to provide diagnostics when file creat… URL: https://github.com/apache/hadoop/pull/1892#issuecomment-597559250

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 25m 22s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 19m 13s | trunk passed |
| +1 :green_heart: | compile | 16m 49s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 51s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 30s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 37s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 3s | trunk passed |
| +0 :ok: | spotbugs | 2m 6s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 5s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 50s | the patch passed |
| +1 :green_heart: | compile | 16m 10s | the patch passed |
| +1 :green_heart: | javac | 16m 10s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 52s | the patch passed |
| +1 :green_heart: | mvnsite | 1m 26s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 10s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 3s | the patch passed |
| +1 :green_heart: | findbugs | 2m 14s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 10s | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | 131m 26s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1892/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1892 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9b5ef09f611d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / cf9cf83 |
| Default Java | 1.8.0_242 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1892/1/testReport/ |
| Max. process+thread count | 3045 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1892/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[jira] [Assigned] (HADOOP-16919) [thirdparty] Handle release package related issues
[ https://issues.apache.org/jira/browse/HADOOP-16919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinayakumar B reassigned HADOOP-16919: -- Assignee: Vinayakumar B > [thirdparty] Handle release package related issues > -- > > Key: HADOOP-16919 > URL: https://issues.apache.org/jira/browse/HADOOP-16919 > Project: Hadoop Common > Issue Type: Bug > Components: hadoop-thirdparty >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > > Handle following comments from [~elek] in 1.0.0-RC0 voting mail thread > here[[https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]] > {quote}3. Yetus seems to be included in the source package. I am not sure if > it's intentional but I would remove the patchprocess directory from the > tar file. > 7. Minor nit: I would suggest to use only the filename in the sha512 > files (instead of having the /build/source/target prefix). It would help > to use `sha512 -c` command to validate the checksum. > {quote} > Also, update available artifacts in docs.
[jira] [Created] (HADOOP-16919) [thirdparty] Handle release package related issues
Vinayakumar B created HADOOP-16919: -- Summary: [thirdparty] Handle release package related issues Key: HADOOP-16919 URL: https://issues.apache.org/jira/browse/HADOOP-16919 Project: Hadoop Common Issue Type: Bug Components: hadoop-thirdparty Reporter: Vinayakumar B Handle following comments from [~elek] in 1.0.0-RC0 voting mail thread here[[https://lists.apache.org/thread.html/r1f2e8325ecef239f0d713c683a16336e2a22431a9f6bfbde3c763816%40%3Ccommon-dev.hadoop.apache.org%3E]] {quote}3. Yetus seems to be included in the source package. I am not sure if it's intentional but I would remove the patchprocess directory from the tar file. 7. Minor nit: I would suggest to use only the filename in the sha512 files (instead of having the /build/source/target prefix). It would help to use `sha512 -c` command to validate the checksum. {quote} Also, update available artifacts in docs.
[jira] [Commented] (HADOOP-16895) [thirdparty] Revisit LICENSEs and NOTICEs
[ https://issues.apache.org/jira/browse/HADOOP-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056754#comment-17056754 ] Hudson commented on HADOOP-16895: - SUCCESS: Integrated in Jenkins build Hadoop-thirdparty-trunk-commit #7 (See [https://builds.apache.org/job/Hadoop-thirdparty-trunk-commit/7/]) HADOOP-16895. [thirdparty] Revisit LICENSEs and NOTICEs (#6) (vinayakumarb: rev a11c32cd2257139275a99cc779249861833be38a) * (edit) NOTICE.txt * (edit) hadoop-shaded-jaeger/pom.xml * (add) licenses-binary/LICENSE.kotlin.txt * (add) licenses-binary/LICENSE.slf4j.txt * (add) licenses-binary/LICENSE-cddl-gplv2-ce.txt * (edit) hadoop-shaded-protobuf_3_7/pom.xml * (edit) LICENSE.txt * (add) licenses-binary/LICENSE.jetbrains.txt * (delete) licenses-binary/LICENSE-protobuf.txt * (edit) LICENSE-binary * (add) licenses-binary/LICENSE.protobuf.txt * (edit) NOTICE-binary > [thirdparty] Revisit LICENSEs and NOTICEs > - > > Key: HADOOP-16895 > URL: https://issues.apache.org/jira/browse/HADOOP-16895 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Vinayakumar B >Assignee: Vinayakumar B >Priority: Major > > LICENSE.txt and NOTICE.txt have many entries which are unrelated to > thirdparty, > Revisit and cleanup such entries.
[jira] [Commented] (HADOOP-16769) LocalDirAllocator to provide diagnostics when file creation fails
[ https://issues.apache.org/jira/browse/HADOOP-16769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17056743#comment-17056743 ] Ramesh Kumar Thangarajan commented on HADOOP-16769: --- [~gabor.bota] I have addressed the comments in the old PR. Can you please help review the new PR at [https://github.com/apache/hadoop/pull/1892/files]? > LocalDirAllocator to provide diagnostics when file creation fails > - > > Key: HADOOP-16769 > URL: https://issues.apache.org/jira/browse/HADOOP-16769 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Reporter: Ramesh Kumar Thangarajan >Priority: Minor > Attachments: HADOOP-16769.1.patch, HADOOP-16769.3.patch, > HADOOP-16769.4.patch, HADOOP-16769.5.patch, HADOOP-16769.6.patch, > HADOOP-16769.7.patch, HADOOP-16769.8.patch > > > Log details of requested size and available capacity when file creation is > not successful
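The diagnostic the issue asks for is roughly this: when no local directory can satisfy an allocation, report the requested size alongside each candidate directory's remaining space instead of a bare failure. A stand-alone sketch, not the actual LocalDirAllocator patch; the helper name `buildDiagnostics` is invented:

```java
import java.io.File;
import java.util.List;

public class DirDiagnostics {
    /**
     * Build an error message listing the requested size and each
     * candidate directory's usable space, so a failed allocation
     * is debuggable from the log alone.
     */
    static String buildDiagnostics(long requested, List<File> dirs) {
        StringBuilder sb = new StringBuilder(
            "Could not find any valid local directory for requested size "
            + requested + "; directory capacities:");
        for (File d : dirs) {
            sb.append(' ').append(d.getPath())
              .append('=').append(d.getUsableSpace());
        }
        return sb.toString();
    }
}
```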
[GitHub] [hadoop] ramesh0201 opened a new pull request #1892: HADOOP-16769 LocalDirAllocator to provide diagnostics when file creat…
ramesh0201 opened a new pull request #1892: HADOOP-16769 LocalDirAllocator to provide diagnostics when file creat… URL: https://github.com/apache/hadoop/pull/1892
[GitHub] [hadoop] hadoop-yetus commented on issue #1890: HADOOP-16854 Fix to prevent OutOfMemoryException
hadoop-yetus commented on issue #1890: HADOOP-16854 Fix to prevent OutOfMemoryException URL: https://github.com/apache/hadoop/pull/1890#issuecomment-597482104

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 19m 34s | trunk passed |
| +1 :green_heart: | compile | 0m 32s | trunk passed |
| +1 :green_heart: | checkstyle | 0m 24s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 35s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 50s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | trunk passed |
| +0 :ok: | spotbugs | 0m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 50s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 24s | the patch passed |
| +1 :green_heart: | javac | 0m 24s | the patch passed |
| -0 :warning: | checkstyle | 0m 16s | hadoop-tools/hadoop-azure: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | mvnsite | 0m 26s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 5s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed |
| -1 :x: | findbugs | 0m 54s | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 24s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 58m 15s | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-tools/hadoop-azure |
| | Possible doublecheck on org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.threadExecutor in new org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream(AbfsClient, String, long, int, boolean, boolean) At AbfsOutputStream.java:new org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream(AbfsClient, String, long, int, boolean, boolean) At AbfsOutputStream.java:[lines 112-114] |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1890 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 0a6b74a2b28b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / cf9cf83 |
| Default Java | 1.8.0_242 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/testReport/ |
| Max. process+thread count | 415 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1890/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
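The FindBugs warning above ("Possible doublecheck on ... threadExecutor") flags the classic broken double-checked-locking pattern: lazy initialization guarded by a second null check but without a `volatile` field, which the Java memory model does not make safe. A generic sketch of the safe form, not the actual AbfsOutputStream code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LazyExecutorHolder {
    // volatile is what makes double-checked locking valid (JMM, Java 5+)
    private volatile ExecutorService threadExecutor;

    ExecutorService getExecutor() {
        ExecutorService result = threadExecutor;
        if (result == null) {                 // first check, lock-free fast path
            synchronized (this) {
                result = threadExecutor;
                if (result == null) {         // second check, under the lock
                    result = Executors.newFixedThreadPool(4);
                    threadExecutor = result;  // safe publication via volatile
                }
            }
        }
        return result;
    }
}
```

In the actual ABFS case the simpler fix is often to initialize the executor eagerly in the constructor, since the constructor already takes the concurrency settings; the pattern above is only needed when construction must stay cheap.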
[GitHub] [hadoop] hadoop-yetus commented on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong
hadoop-yetus commented on issue #1889: HDFS-15215 The Timestamp for longest write/read lock held log is wrong URL: https://github.com/apache/hadoop/pull/1889#issuecomment-597481223

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 24m 52s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 1m 20s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 19m 34s | trunk passed |
| +1 :green_heart: | compile | 17m 3s | trunk passed |
| +1 :green_heart: | checkstyle | 2m 42s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 54s | trunk passed |
| +1 :green_heart: | shadedclient | 21m 4s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 56s | trunk passed |
| +0 :ok: | spotbugs | 3m 23s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 31s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 7s | the patch passed |
| +1 :green_heart: | compile | 16m 24s | the patch passed |
| +1 :green_heart: | javac | 16m 24s | the patch passed |
| -0 :warning: | checkstyle | 2m 39s | root: The patch generated 2 new + 3 unchanged - 0 fixed = 5 total (was 3) |
| +1 :green_heart: | mvnsite | 2m 54s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 4s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 0s | the patch passed |
| +1 :green_heart: | findbugs | 5m 23s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 9s | hadoop-common in the patch passed. |
| -1 :x: | unit | 93m 44s | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 6s | The patch does not generate ASF License warnings. |
| | | 246m 35s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestAddBlockTailing |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.7 Server=19.03.7 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1889 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux efb7d44196fd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / cf9cf83 |
| Default Java | 1.8.0_242 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/2/artifact/out/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/2/testReport/ |
| Max. process+thread count | 4442 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1889/2/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] Amithsha opened a new pull request #1891: Fdp branch 3.2.1
Amithsha opened a new pull request #1891: Fdp branch 3.2.1 URL: https://github.com/apache/hadoop/pull/1891
[GitHub] [hadoop] Amithsha closed pull request #1891: Fdp branch 3.2.1
Amithsha closed pull request #1891: Fdp branch 3.2.1 URL: https://github.com/apache/hadoop/pull/1891