[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
hadoop-yetus commented on pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#issuecomment-675876739

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 32s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
| | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 28s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 29s | trunk passed |
| +1 :green_heart: | compile | 20m 47s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 20m 39s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 53s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 47s | trunk passed |
| +1 :green_heart: | shadedclient | 22m 16s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 33s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 2m 58s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 3m 20s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 6m 8s | trunk passed |
| | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 3s | the patch passed |
| +1 :green_heart: | compile | 21m 15s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 21m 15s | the patch passed |
| +1 :green_heart: | compile | 18m 24s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 18m 24s | the patch passed |
| -0 :warning: | checkstyle | 2m 53s | root: The patch generated 15 new + 182 unchanged - 1 fixed = 197 total (was 183) |
| +1 :green_heart: | mvnsite | 2m 46s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 34s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 25s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 3m 0s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 5m 55s | the patch passed |
| | | | _ Other Tests _ |
| +1 :green_heart: | unit | 9m 50s | hadoop-common in the patch passed. |
| -1 :x: | unit | 109m 58s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 53s | The patch does not generate ASF License warnings. |
| | | | 306m 54s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux 51b05eaeb1dd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b65e43fe386 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/9/artifact/out/diff-checkstyle-root.txt |
| unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/9/arti
[GitHub] [hadoop] hadoop-yetus commented on pull request #2228: YARN-10399 Refactor NodeQueueLoadMonitor class to make it extendable
hadoop-yetus commented on pull request #2228: URL: https://github.com/apache/hadoop/pull/2228#issuecomment-675838322

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 12s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 28s | trunk passed |
| +1 :green_heart: | compile | 1m 9s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 54s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 42s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 58s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 40s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 40s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 36s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 1m 45s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 1m 42s | trunk passed |
| | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 46s | the patch passed |
| +1 :green_heart: | compile | 0m 49s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 49s | the patch passed |
| +1 :green_heart: | compile | 0m 41s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 41s | the patch passed |
| -0 :warning: | checkstyle | 0m 30s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 16 new + 1 unchanged - 3 fixed = 17 total (was 4) |
| +1 :green_heart: | mvnsite | 0m 46s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 55s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 35s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 33s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 1m 42s | the patch passed |
| | | | _ Other Tests _ |
| -1 :x: | unit | 92m 12s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | | 169m 35s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2228/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2228 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4afdf8e4fc64 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b65e43fe386 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2228/2/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2228/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2228/2/testReport/ |
| Max. process+thread count | 864 (vs. ulimit of 5500) |
[GitHub] [hadoop] bilaharith commented on pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
bilaharith commented on pull request #2179: URL: https://github.com/apache/hadoop/pull/2179#issuecomment-675833767

> Test results posted have failures. What's the plan to handle them?

Please find below the JIRA tracking these failures: https://issues.apache.org/jira/browse/HADOOP-17160

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bilaharith edited a comment on pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
bilaharith edited a comment on pull request #2179: URL: https://github.com/apache/hadoop/pull/2179#issuecomment-675833767

> Test results posted have failures. What's the plan to handle them?

Please find below the JIRAs tracking these failures: https://issues.apache.org/jira/browse/HADOOP-17160 https://issues.apache.org/jira/browse/HADOOP-17149
[GitHub] [hadoop] bilaharith closed pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
bilaharith closed pull request #2179: URL: https://github.com/apache/hadoop/pull/2179
[GitHub] [hadoop] snvijaya commented on a change in pull request #2213: HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance
snvijaya commented on a change in pull request #2213: URL: https://github.com/apache/hadoop/pull/2213#discussion_r472644126

File path: hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java

```diff
@@ -412,6 +413,17 @@ public void testSequentialReadAfterReverseSeekPerformance()
   }
 
   @Test
+  @Ignore(
+      "ABFS accounts are primarily for customers to use with HNS property "
```

Review comment: The code comment needs only the JIRA number, and the JIRA is still missing a description.
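The reviewer's suggestion above (reference only the tracking JIRA instead of a long inline explanation) could look like the following sketch. This is illustrative only: the JIRA number comes from the PR title, the test name from the file under review, and the final wording the authors adopted may differ.

```java
// Hypothetical sketch: carry only the tracking JIRA in the @Ignore reason
// and keep the full explanation in the JIRA itself, as the reviewer suggests.
@Test
@Ignore("HADOOP-16915")
public void testRandomReadPerformance() throws Exception {
  // test body unchanged
}
```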
[GitHub] [hadoop] snvijaya commented on pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
snvijaya commented on pull request #2179: URL: https://github.com/apache/hadoop/pull/2179#issuecomment-675831394

Test results posted have failures. What's the plan to handle them?
[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675796919

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 30m 0s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
| | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 16s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 15s | trunk passed |
| +1 :green_heart: | compile | 20m 46s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 17m 38s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 58s | trunk passed |
| +1 :green_heart: | mvnsite | 21m 12s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 43s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 6m 30s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 7m 6s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 23s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 19s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
| +0 :ok: | findbugs | 0m 23s | branch/hadoop-project-dist no findbugs output file (findbugsXml.xml) |
| | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 23m 13s | the patch passed |
| +1 :green_heart: | compile | 20m 10s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| -1 :x: | cc | 20m 11s | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 28 new + 133 unchanged - 28 fixed = 161 total (was 161) |
| +1 :green_heart: | golang | 20m 10s | the patch passed |
| +1 :green_heart: | javac | 20m 10s | the patch passed |
| +1 :green_heart: | compile | 17m 43s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| -1 :x: | cc | 17m 43s | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 24 new + 137 unchanged - 24 fixed = 161 total (was 161) |
| +1 :green_heart: | golang | 17m 43s | the patch passed |
| +1 :green_heart: | javac | 17m 43s | the patch passed |
| -0 :warning: | checkstyle | 2m 55s | root: The patch generated 11 new + 140 unchanged - 3 fixed = 151 total (was 143) |
| +1 :green_heart: | hadolint | 0m 4s | There were no new hadolint issues. |
| +1 :green_heart: | mvnsite | 18m 16s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 2s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 14s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 5s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 15m 47s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 6m 26s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 7m 1s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | findbugs | 0m 17s | hadoop-project has no data from findbugs |
| +0 :ok: | findbugs | 0m 18s | hadoop-project-dist has no data from findbugs |
| | | | _ Other Tests _ |
| -1 :x: | unit | 592m 36s | root in the patch failed. |
| -1 :x: | asflicense | 1m 25s | The patch generated 1 ASF License warning. |
| | | | 938m 38s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.applications.distributedshell.TestDistributedShell |
| | hadoop.io.compress.TestCompressorDecompressor |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
| | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
| | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
| | hadoop
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
umamaheswararao commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r472578691

File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java

```diff
@@ -0,0 +1,1864 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.BlockStoragePolicySpi;
+import org.apache.hadoop.fs.CacheFlag;
+import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileChecksum;
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsServerDefaults;
+import org.apache.hadoop.fs.FsStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Options;
+import org.apache.hadoop.fs.PartialListing;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.fs.PathHandle;
+import org.apache.hadoop.fs.QuotaUsage;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.XAttrSetFlag;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclStatus;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.fs.viewfs.ViewFileSystem;
+import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
+import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
+import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
+import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult;
+import org.apache.hadoop.hdfs.protocol.EncryptionZone;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.HdfsPathHandle;
+import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
+import org.apache.hadoop.hdfs.protocol.OpenFilesIterator;
+import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.token.DelegationTokenIssuer;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+
+import java.net.InetSocketAddress;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * The ViewDistributedFileSystem is an extended class to DistributedFileSystem
+ * with additional mounting functionality. The goal is to have better API
+ * compatibility for HDFS users when using mounting
+ * filesystem(ViewFileSystemOverloadScheme).
+ * The ViewFileSystemOverloadScheme{@link ViewFileSystemOverloadScheme} is a new
+ * filesystem with inheri
```
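As a rough illustration of the compatibility goal described in the javadoc above, a deployment could serve ordinary hdfs:// paths through this class while defining viewfs-style mount links. This is a hedged sketch only: the property names follow the ViewFileSystemOverloadScheme mount-table convention, and the mount-table name (`mycluster`) and target URIs are hypothetical.

```xml
<!-- Sketch (illustrative names): resolve hdfs:// paths through
     ViewDistributedFileSystem so existing DFS clients keep working. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.ViewDistributedFileSystem</value>
</property>
<property>
  <name>fs.viewfs.mounttable.mycluster.link./data</name>
  <value>hdfs://namenode1:8020/data</value>
</property>
```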
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
umamaheswararao commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r472578517

File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java (the quoted diff hunk is identical to the one in the previous comment on this file)
[jira] [Commented] (HADOOP-17205) Move personality file from Yetus to Hadoop repository
[ https://issues.apache.org/jira/browse/HADOOP-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17180142#comment-17180142 ]

Chao Sun commented on HADOOP-17205:
Thanks [~aajisaka] for the review and commits to other branches!

> Move personality file from Yetus to Hadoop repository
>
> Key: HADOOP-17205
> URL: https://issues.apache.org/jira/browse/HADOOP-17205
> Project: Hadoop Common
> Issue Type: Test
> Components: test, yetus
> Reporter: Chao Sun
> Assignee: Chao Sun
> Priority: Major
> Fix For: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
> Currently for CI build and testing we maintain personality scripts (i.e., [here|https://github.com/apache/yetus/blob/master/precommit/src/main/shell/personality/hadoop.sh]) in both Apache Yetus and Apache Hadoop. This poses a problem when one needs to change both places, for example HADOOP-17125.
> This proposes to move the personality file into the Hadoop repo itself, so that we can manage it in a single place. The downside is that we may need to duplicate the scripts in every branch.

This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache
JohnZZGithub commented on pull request #2223: URL: https://github.com/apache/hadoop/pull/2223#issuecomment-675689778

https://issues.apache.org/jira/browse/YARN-10398
[GitHub] [hadoop] JohnZZGithub commented on pull request #2223: YARN-10398. Fix the bug to make sure only application master upload resource to Yarn Shared Cache
JohnZZGithub commented on pull request #2223: URL: https://github.com/apache/hadoop/pull/2223#issuecomment-675689487

@steveloughran Could you please help review the patch? Thanks
[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#issuecomment-675689087

@umamaheswararao Thanks for the review. I updated the diff based on the comments. There might still be some issues in the docs and comments; I will continue to watch it. Thanks
[jira] [Updated] (HADOOP-17214) Allow file system caching to be disabled for all file systems
[ https://issues.apache.org/jira/browse/HADOOP-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haibo Chen updated HADOOP-17214:
Summary: Allow file system caching to be disabled for all file systems (was: Allow file system cache to be disabled for all file systems)

> Allow file system caching to be disabled for all file systems
>
> Key: HADOOP-17214
> URL: https://issues.apache.org/jira/browse/HADOOP-17214
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Reporter: Haibo Chen
> Assignee: Haibo Chen
> Priority: Major
>
> Right now, FileSystem.get(URI uri, Configuration conf) allows caching of file systems to be disabled per scheme.
> We can introduce a new global conf to disable caching for all FileSystems; the default would be false (i.e., do not disable the cache globally).
[jira] [Created] (HADOOP-17214) Allow file system cache to be disabled for all file systems
Haibo Chen created HADOOP-17214: --- Summary: Allow file system cache to be disabled for all file systems Key: HADOOP-17214 URL: https://issues.apache.org/jira/browse/HADOOP-17214 Project: Hadoop Common Issue Type: Improvement Components: common Reporter: Haibo Chen Assignee: Haibo Chen Right now, FileSystem.get(URI uri, Configuration conf) allows caching of file systems to be disabled per scheme. We can introduce a new global conf to disable caching for all FileSystems; the default would be false (i.e., do not disable the cache globally).
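For context, the existing per-scheme mechanism mentioned above is the `fs.<scheme>.impl.disable.cache` property. A minimal `core-site.xml` sketch follows; note the global property name used here is hypothetical, since the JIRA does not yet name the new key:

```xml
<configuration>
  <!-- Existing behavior: disable FileSystem caching for one scheme only. -->
  <property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>true</value>
  </property>
  <!-- Proposed (hypothetical key name): disable caching for every scheme
       at once; defaults to false, i.e. caching stays enabled globally. -->
  <property>
    <name>fs.impl.disable.cache.default</name>
    <value>false</value>
  </property>
</configuration>
```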
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r472385404 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java ## @@ -646,102 +714,222 @@ boolean isInternalDir() { } /** - * Resolve the pathname p relative to root InodeDir + * Resolve the pathname p relative to root InodeDir. * @param p - input path * @param resolveLastComponent * @return ResolveResult which allows further resolution of the remaining path * @throws FileNotFoundException */ ResolveResult resolve(final String p, final boolean resolveLastComponent) throws FileNotFoundException { -String[] path = breakIntoPathComponents(p); -if (path.length <= 1) { // special case for when path is "/" - T targetFs = root.isInternalDir() ? - getRootDir().getInternalDirFs() : getRootLink().getTargetFileSystem(); - ResolveResult res = new ResolveResult(ResultKind.INTERNAL_DIR, - targetFs, root.fullPath, SlashPath); - return res; -} +ResolveResult resolveResult = null; +resolveResult = getResolveResultFromCache(p, resolveLastComponent); +if (resolveResult != null) { + return resolveResult; +} + +try { + String[] path = breakIntoPathComponents(p); + if (path.length <= 1) { // special case for when path is "/" +T targetFs = root.isInternalDir() ? +getRootDir().getInternalDirFs() +: getRootLink().getTargetFileSystem(); +resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR, +targetFs, root.fullPath, SlashPath); +return resolveResult; + } -/** - * linkMergeSlash has been configured. The root of this mount table has - * been linked to the root directory of a file system. - * The first non-slash path component should be name of the mount table. - */ -if (root.isLink()) { - Path remainingPath; - StringBuilder remainingPathStr = new StringBuilder(); - // ignore first slash - for (int i = 1; i < path.length; i++) { -remainingPathStr.append("/").append(path[i]); + /** + * linkMergeSlash has been configured. 
The root of this mount table has + * been linked to the root directory of a file system. + * The first non-slash path component should be name of the mount table. + */ + if (root.isLink()) { +Path remainingPath; +StringBuilder remainingPathStr = new StringBuilder(); +// ignore first slash +for (int i = 1; i < path.length; i++) { + remainingPathStr.append("/").append(path[i]); +} +remainingPath = new Path(remainingPathStr.toString()); +resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR, +getRootLink().getTargetFileSystem(), root.fullPath, remainingPath); +return resolveResult; } - remainingPath = new Path(remainingPathStr.toString()); - ResolveResult res = new ResolveResult(ResultKind.EXTERNAL_DIR, - getRootLink().getTargetFileSystem(), root.fullPath, remainingPath); - return res; -} -Preconditions.checkState(root.isInternalDir()); -INodeDir curInode = getRootDir(); + Preconditions.checkState(root.isInternalDir()); + INodeDir curInode = getRootDir(); -int i; -// ignore first slash -for (i = 1; i < path.length - (resolveLastComponent ? 0 : 1); i++) { - INode nextInode = curInode.resolveInternal(path[i]); - if (nextInode == null) { -if (hasFallbackLink()) { - return new ResolveResult(ResultKind.EXTERNAL_DIR, - getRootFallbackLink().getTargetFileSystem(), - root.fullPath, new Path(p)); -} else { - StringBuilder failedAt = new StringBuilder(path[0]); - for (int j = 1; j <= i; ++j) { -failedAt.append('/').append(path[j]); + // Try to resolve path in the regex mount point + resolveResult = tryResolveInRegexMountpoint(p, resolveLastComponent); + if (resolveResult != null) { +return resolveResult; + } + + int i; + // ignore first slash + for (i = 1; i < path.length - (resolveLastComponent ? 
0 : 1); i++) { +INode nextInode = curInode.resolveInternal(path[i]); +if (nextInode == null) { + if (hasFallbackLink()) { +resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR, +getRootFallbackLink().getTargetFileSystem(), root.fullPath, +new Path(p)); +return resolveResult; + } else { +StringBuilder failedAt = new StringBuilder(path[0]); +for (int j = 1; j <= i; ++j) { + failedAt.append('/').append(path[j]); +} +throw (new FileNotFoundException( +"File/Directory does not exist: " + failedAt.toString())); }
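The `resolve()` changes above consult `tryResolveInRegexMountpoint` before walking the inode tree. A self-contained sketch of the underlying idea, assuming a mount point whose source is a regex with a named capture group and whose target template references that group; the class and method names below are illustrative, not the patch's actual API, and the `${username}` substitution is hardcoded for brevity:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMountPointSketch {
    /**
     * Resolve a path against one regex mount point.
     * Returns the rewritten target path, or null if the mount point
     * does not match (so the caller can fall through to normal
     * inode-tree resolution).
     */
    public static String resolve(String srcRegex, String targetTemplate,
            String path) {
        Matcher m = Pattern.compile(srcRegex).matcher(path);
        // Require a match anchored at the start of the path.
        if (!m.find() || m.start() != 0) {
            return null;
        }
        // Simplification: substitute only the 'username' capture group.
        String target = targetTemplate.replace("${username}",
                m.group("username"));
        // Append the unmatched remainder of the path.
        return target + path.substring(m.end());
    }

    public static void main(String[] args) {
        String resolved = resolve("^/user/(?<username>\\w+)",
                "hdfs://nn1/users/${username}", "/user/alice/data");
        System.out.println(resolved); // hdfs://nn1/users/alice/data
    }
}
```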
[GitHub] [hadoop] hadoop-yetus commented on pull request #2213: HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance
hadoop-yetus commented on pull request #2213: URL: https://github.com/apache/hadoop/pull/2213#issuecomment-675616117

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 0s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 28m 32s | trunk passed |
| +1 :green_heart: | compile | 0m 37s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 26s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 38s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 9s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 30s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 27s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 57s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 54s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 28s | the patch passed |
| +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 27s | the patch passed |
| +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 25s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 17s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 28s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 48s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 24s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 56s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 26s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | | 70m 35s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2213 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b7c4e4acda96 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b65e43fe386 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/3/testReport/ |
| Max. process+thread count | 425 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/3/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675611383 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2201/7/console in case of problems.
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r472360734 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java ## @@ -226,7 +239,14 @@ void addLink(final String pathComponent, final INodeLink link) * Config prefix: fs.viewfs.mounttable..linkNfly * Refer: {@link Constants#CONFIG_VIEWFS_LINK_NFLY} */ -NFLY; +NFLY, +/** + * Link entry whose source is a regex expression and whose target refers to + * matched groups from the source. + * Config prefix: fs.viewfs.mounttable..linkMerge Review comment: Yes, nice catch.
[jira] [Updated] (HADOOP-11460) Deprecate shell vars
[ https://issues.apache.org/jira/browse/HADOOP-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Dvorzhak updated HADOOP-11460: --- Release Note: | Old | New | |: |: | | HADOOP\_HDFS\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_HDFS\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_HDFS\_NICENESS | HADOOP\_NICENESS | | HADOOP\_HDFS\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_HDFS\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_HDFS\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_HDFS\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | HADOOP\_MAPRED\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_MAPRED\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_MAPRED\_NICENESS | HADOOP\_NICENESS | | HADOOP\_MAPRED\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_MAPRED\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_MAPRED\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_MAPRED\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_CONF\_DIR | HADOOP\_CONF\_DIR | | YARN\_LOG\_DIR | HADOOP\_LOG\_DIR | | YARN\_LOGFILE | HADOOP\_LOGFILE | | YARN\_NICENESS | HADOOP\_NICENESS | | YARN\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | YARN\_PID\_DIR | HADOOP\_PID\_DIR | | YARN\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | YARN\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_OPTS | HADOOP\_OPTS | | YARN\_SLAVES | HADOOP\_SLAVES | | YARN\_USER\_CLASSPATH | HADOOP\_CLASSPATH | | YARN\_USER\_CLASSPATH\_FIRST | HADOOP\_USER\_CLASSPATH\_FIRST | | KMS\_CONFIG | HADOOP\_CONF\_DIR | | KMS\_LOG | HADOOP\_LOG\_DIR | was: ``` | Old | New | |: |: | | HADOOP\_HDFS\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_HDFS\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_HDFS\_NICENESS | HADOOP\_NICENESS | | HADOOP\_HDFS\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_HDFS\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_HDFS\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_HDFS\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | HADOOP\_MAPRED\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_MAPRED\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_MAPRED\_NICENESS | 
HADOOP\_NICENESS | | HADOOP\_MAPRED\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_MAPRED\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_MAPRED\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_MAPRED\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_CONF\_DIR | HADOOP\_CONF\_DIR | | YARN\_LOG\_DIR | HADOOP\_LOG\_DIR | | YARN\_LOGFILE | HADOOP\_LOGFILE | | YARN\_NICENESS | HADOOP\_NICENESS | | YARN\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | YARN\_PID\_DIR | HADOOP\_PID\_DIR | | YARN\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | YARN\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_OPTS | HADOOP\_OPTS | | YARN\_SLAVES | HADOOP\_SLAVES | | YARN\_USER\_CLASSPATH | HADOOP\_CLASSPATH | | YARN\_USER\_CLASSPATH\_FIRST | HADOOP\_USER\_CLASSPATH\_FIRST | | KMS\_CONFIG | HADOOP\_CONF\_DIR | | KMS\_LOG | HADOOP\_LOG\_DIR | ``` > Deprecate shell vars > > > Key: HADOOP-11460 > URL: https://issues.apache.org/jira/browse/HADOOP-11460 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: John Smith >Priority: Major > Labels: scripts, shell > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-11460-00.patch, HADOOP-11460-01.patch, > HADOOP-11460-02.patch, HADOOP-11460-03.patch, HADOOP-11460-04.patch > > > It is a very common shell pattern in 3.x to effectively replace sub-project > specific vars with generics. We should have a function that does this > replacement and provides a warning to the end user that the old shell var is > deprecated. Additionally, we should use this shell function to deprecate the > shell vars that are holdovers already. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11460) Deprecate shell vars
[ https://issues.apache.org/jira/browse/HADOOP-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Dvorzhak updated HADOOP-11460: --- Release Note: ``` | Old | New | |: |: | | HADOOP\_HDFS\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_HDFS\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_HDFS\_NICENESS | HADOOP\_NICENESS | | HADOOP\_HDFS\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_HDFS\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_HDFS\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_HDFS\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | HADOOP\_MAPRED\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_MAPRED\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_MAPRED\_NICENESS | HADOOP\_NICENESS | | HADOOP\_MAPRED\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_MAPRED\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_MAPRED\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_MAPRED\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_CONF\_DIR | HADOOP\_CONF\_DIR | | YARN\_LOG\_DIR | HADOOP\_LOG\_DIR | | YARN\_LOGFILE | HADOOP\_LOGFILE | | YARN\_NICENESS | HADOOP\_NICENESS | | YARN\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | YARN\_PID\_DIR | HADOOP\_PID\_DIR | | YARN\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | YARN\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_OPTS | HADOOP\_OPTS | | YARN\_SLAVES | HADOOP\_SLAVES | | YARN\_USER\_CLASSPATH | HADOOP\_CLASSPATH | | YARN\_USER\_CLASSPATH\_FIRST | HADOOP\_USER\_CLASSPATH\_FIRST | | KMS\_CONFIG | HADOOP\_CONF\_DIR | | KMS\_LOG | HADOOP\_LOG\_DIR | ``` was: | Old | New | |: |: | | HADOOP\_HDFS\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_HDFS\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_HDFS\_NICENESS | HADOOP\_NICENESS | | HADOOP\_HDFS\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_HDFS\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_HDFS\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_HDFS\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | HADOOP\_MAPRED\_LOG\_DIR | HADOOP\_LOG\_DIR | | HADOOP\_MAPRED\_LOGFILE | HADOOP\_LOGFILE | | HADOOP\_MAPRED\_NICENESS | 
HADOOP\_NICENESS | | HADOOP\_MAPRED\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | HADOOP\_MAPRED\_PID\_DIR | HADOOP\_PID\_DIR | | HADOOP\_MAPRED\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | HADOOP\_MAPRED\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_CONF\_DIR | HADOOP\_CONF\_DIR | | YARN\_LOG\_DIR | HADOOP\_LOG\_DIR | | YARN\_LOGFILE | HADOOP\_LOGFILE | | YARN\_NICENESS | HADOOP\_NICENESS | | YARN\_STOP\_TIMEOUT | HADOOP\_STOP\_TIMEOUT | | YARN\_PID\_DIR | HADOOP\_PID\_DIR | | YARN\_ROOT\_LOGGER | HADOOP\_ROOT\_LOGGER | | YARN\_IDENT\_STRING | HADOOP\_IDENT\_STRING | | YARN\_OPTS | HADOOP\_OPTS | | YARN\_SLAVES | HADOOP\_SLAVES | | YARN\_USER\_CLASSPATH | HADOOP\_CLASSPATH | | YARN\_USER\_CLASSPATH\_FIRST | HADOOP\_USER\_CLASSPATH\_FIRST | | KMS\_CONFIG | HADOOP\_CONF\_DIR | | KMS\_LOG | HADOOP\_LOG\_DIR | > Deprecate shell vars > > > Key: HADOOP-11460 > URL: https://issues.apache.org/jira/browse/HADOOP-11460 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: John Smith >Priority: Major > Labels: scripts, shell > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-11460-00.patch, HADOOP-11460-01.patch, > HADOOP-11460-02.patch, HADOOP-11460-03.patch, HADOOP-11460-04.patch > > > It is a very common shell pattern in 3.x to effectively replace sub-project > specific vars with generics. We should have a function that does this > replacement and provides a warning to the end user that the old shell var is > deprecated. Additionally, we should use this shell function to deprecate the > shell vars that are holdovers already. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
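The mapping tables above pair each deprecated sub-project variable with its generic replacement, and the issue calls for a function that performs the replacement and warns the user. A minimal bash sketch of that pattern follows; it is written from scratch to mirror the intent of the helper, not copied from hadoop-functions.sh:

```shell
#!/usr/bin/env bash
# Sketch of a shell-var deprecation helper (assumes bash).
# Warns when a deprecated variable is set and copies its value into the
# generic replacement unless the replacement is already set.
hadoop_deprecate_envvar() {
  local oldvar=$1
  local newvar=$2
  if [[ -n "${!oldvar}" ]]; then
    echo "WARNING: ${oldvar} has been replaced by ${newvar}." >&2
    if [[ -z "${!newvar}" ]]; then
      # Indirect assignment: newvar receives oldvar's value.
      eval "${newvar}=\"\${${oldvar}}\""
    fi
  fi
}

# Example: the old YARN-specific var is still honored, with a warning.
YARN_LOG_DIR=/tmp/yarn-logs
hadoop_deprecate_envvar YARN_LOG_DIR HADOOP_LOG_DIR
echo "${HADOOP_LOG_DIR}"
```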
[GitHub] [hadoop] BilwaST commented on pull request #2228: YARN-10399 Refactor NodeQueueLoadMonitor class to make it extendable
BilwaST commented on pull request #2228: URL: https://github.com/apache/hadoop/pull/2228#issuecomment-675597847 Thanks @zhengbli for the patch. Overall the patch looks good to me. Just a minor nit: can you add a comment to the code below, which you removed? ` if (!excludeFullNodes || !cNode.isQueueFull()) { retList.add(cNode);`
[jira] [Created] (HADOOP-17213) ABFS: Test failure ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics
Bilahari T H created HADOOP-17213: - Summary: ABFS: Test failure ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics Key: HADOOP-17213 URL: https://issues.apache.org/jira/browse/HADOOP-17213 Project: Hadoop Common Issue Type: Sub-task Components: fs/azure Affects Versions: 3.4.0 Reporter: Bilahari T H The test ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics fails when the property fs.azure.test.appendblob.enabled is set to true.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675487358

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 21s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 26m 40s | trunk passed |
| +1 :green_heart: | compile | 25m 15s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 20m 57s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 3m 45s | trunk passed |
| +1 :green_heart: | mvnsite | 25m 25s | trunk passed |
| +1 :green_heart: | shadedclient | 17m 49s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 8m 20s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 9m 2s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 32s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 21s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
| +0 :ok: | findbugs | 0m 31s | branch/hadoop-project-dist no findbugs output file (findbugsXml.xml) |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 35s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 20m 50s | the patch passed |
| -1 :x: | compile | 1m 18s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | cc | 1m 18s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | golang | 1m 18s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | javac | 1m 18s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | compile | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -1 :x: | cc | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -1 :x: | golang | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -1 :x: | javac | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -0 :warning: | checkstyle | 2m 38s | root: The patch generated 11 new + 140 unchanged - 3 fixed = 151 total (was 143) |
| +1 :green_heart: | mvnsite | 17m 16s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 2s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 18s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 5s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 14m 8s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 6m 24s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 7m 2s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | findbugs | 0m 21s | hadoop-project has no data from findbugs |
| +0 :ok: | findbugs | 0m 23s | hadoop-project-dist has no data from findbugs |
||| _ Other Tests _ |
| -1 :x: | unit | 7m 6s | root in the patch failed. |
| +1 :green_heart: | asflicense | 1m 7s | The patch does not generate ASF License warnings. |
| | | | 304m 40s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2201/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2201 |
| Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit compile javac javadoc mvninstall shadedclient xml findbugs checkstyle cc golang |
| uname | Linux b9047c9c6497 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2202: HADOOP-17191. ABFS: Run tests with all AuthTypes
hadoop-yetus commented on pull request #2202: URL: https://github.com/apache/hadoop/pull/2202#issuecomment-675482389

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 48 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 28m 53s | trunk passed |
| +1 :green_heart: | compile | 0m 37s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 32s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 27s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 38s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 36s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 30s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 28s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 57s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 55s | trunk passed |
| -0 :warning: | patch | 1m 14s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 29s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| -1 :x: | javac | 0m 29s | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 13 unchanged - 2 fixed = 15 total (was 15) |
| +1 :green_heart: | compile | 0m 24s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| -1 :x: | javac | 0m 24s | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new + 7 unchanged - 2 fixed = 9 total (was 9) |
| -0 :warning: | checkstyle | 0m 18s | hadoop-tools/hadoop-azure: The patch generated 15 new + 13 unchanged - 1 fixed = 28 total (was 14) |
| +1 :green_heart: | mvnsite | 0m 28s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 51s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 59s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 1m 29s | hadoop-azure in the patch failed. |
| -1 :x: | asflicense | 0m 33s | The patch generated 1 ASF License warning. |
| | | | 70m 13s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.fs.azurebfs.TestAbfsOutputStreamStatistics |
| | hadoop.fs.azurebfs.TestAbfsInputStreamStatistics |
| | hadoop.fs.azurebfs.services.TestAzureADAuthenticator |
| | hadoop.fs.azurebfs.TestAbfsStatistics |
| | hadoop.fs.azurebfs.services.TestExponentialRetryPolicy |
| | hadoop.fs.azurebfs.services.TestAbfsInputStream |
| | hadoop.fs.azurebfs.TestAbfsNetworkStatistics |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2202 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7301d33bab64 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b65e43fe386 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjd
[GitHub] [hadoop] hadoop-yetus commented on pull request #2202: HADOOP-17191. ABFS: Run tests with all AuthTypes
hadoop-yetus commented on pull request #2202: URL: https://github.com/apache/hadoop/pull/2202#issuecomment-675409763

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 30m 24s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 48 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 21s | trunk passed |
| +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 28s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 22s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 34s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 33s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 26s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 22s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 54s | Used deprecated FindBugs config; consider switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 52s | trunk passed |
| -0 :warning: | patch | 1m 8s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 27s | the patch passed |
| +1 :green_heart: | compile | 0m 28s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 28s | the patch passed |
| +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 23s | the patch passed |
| -0 :warning: | checkstyle | 0m 15s | hadoop-tools/hadoop-azure: The patch generated 15 new + 13 unchanged - 1 fixed = 28 total (was 14) |
| +1 :green_heart: | mvnsite | 0m 26s | the patch passed |
| +1 :green_heart: | whitespace | 0m 1s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 42s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 20s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 55s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 1m 18s | hadoop-azure in the patch failed. |
| -1 :x: | asflicense | 0m 28s | The patch generated 1 ASF License warning. |
| | | 105m 15s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.azurebfs.TestAbfsNetworkStatistics |
| | hadoop.fs.azurebfs.services.TestAbfsInputStream |
| | hadoop.fs.azurebfs.TestAbfsInputStreamStatistics |
| | hadoop.fs.azurebfs.TestAbfsOutputStreamStatistics |
| | hadoop.fs.azurebfs.TestAbfsStatistics |
| | hadoop.fs.azurebfs.services.TestAzureADAuthenticator |
| | hadoop.fs.azurebfs.services.TestExponentialRetryPolicy |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2202 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux c9302759a201 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / fefacf2578e |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/9/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt |
| unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/9/artifact/out/patch-unit-hadoop-tools_hadoop-azure.tx
[GitHub] [hadoop] iwasakims commented on pull request #2220: HDFS-15525. Make trash root inside each snapshottable directory for WebHDFS
iwasakims commented on pull request #2220: URL: https://github.com/apache/hadoop/pull/2220#issuecomment-675398853 I merged this. Thanks, @smengcl. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] iwasakims merged pull request #2220: HDFS-15525. Make trash root inside each snapshottable directory for WebHDFS
iwasakims merged pull request #2220: URL: https://github.com/apache/hadoop/pull/2220
[GitHub] [hadoop] hadoop-yetus commented on pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
hadoop-yetus commented on pull request #2179: URL: https://github.com/apache/hadoop/pull/2179#issuecomment-675395752

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 6s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 28m 34s | trunk passed |
| +1 :green_heart: | compile | 0m 36s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 27s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s | trunk passed |
| +1 :green_heart: | shadedclient | 16m 51s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 30s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 28s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 58s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 55s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 28s | the patch passed |
| +1 :green_heart: | compile | 0m 28s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 28s | the patch passed |
| +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 25s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 17s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 28s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 49s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 56s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 28s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 72m 34s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2179 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux fffe87c8269a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / fefacf2578e |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/6/testReport/ |
| Max. process+thread count | 401 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/6/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17179503#comment-17179503 ] Hemanth Boyina commented on HADOOP-17144:

Thanks for the comment, [~iwasakims]. I have verified on CentOS 7, SUSE, and Ubuntu.

> Update Hadoop's lz4 to v1.9.2
> -----------------------------
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, HADOOP-17144.003.patch
>
> Update Hadoop's native lz4 to v1.9.2

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675376588

(!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2201/6/console in case of problems.
[jira] [Commented] (HADOOP-17212) Improve and revise the performance description in the Tencent COS website document
[ https://issues.apache.org/jira/browse/HADOOP-17212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17179478#comment-17179478 ] Hadoop QA commented on HADOOP-17212:

(/) **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 28m 30s | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0m 0s | No case conflicting files found. |
| 0 | markdownlint | 0m 0s | markdownlint was not available. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-3.3 Compile Tests _ |
| +1 | mvninstall | 30m 49s | branch-3.3 passed |
| +1 | mvnsite | 0m 24s | branch-3.3 passed |
| +1 | shadedclient | 46m 57s | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 0m 19s | the patch passed |
| +1 | mvnsite | 0m 16s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 16s | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 92m 27s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/33/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17212 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13010003/HADOOP-17212-branch-3.3.001.patch |
| Optional Tests | dupname asflicense mvnsite markdownlint |
| uname | Linux e02468fa20e3 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.3 / 197219a |
| Max. process+thread count | 306 (vs. ulimit of 5500) |
| modules | C: hadoop-cloud-storage-project/hadoop-cos U: hadoop-cloud-storage-project/hadoop-cos |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/33/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

> Improve and revise the performance description in the Tencent COS website document
> ----------------------------------------------------------------------------------
> Key: HADOOP-17212
> URL: https://issues.apache.org/jira/browse/HADOOP-17212
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/cos
> Affects Versions: 3.3.0
> Reporter: Yang Yu
> Assignee: Yang Yu
> Priority: Major
> Attachments: HADOOP-17212-branch-3.3.001.patch
>
> Improve the description of the maximum single file size limit and revise the performance data in the other issue section due to the performance improvement of COS backend architecture.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
hadoop-yetus commented on pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#issuecomment-675348714

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 25s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 47s | trunk passed |
| +1 :green_heart: | compile | 19m 28s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 16m 50s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 46s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 57s | trunk passed |
| +1 :green_heart: | shadedclient | 20m 19s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 42s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 3m 16s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 3m 15s | Used deprecated FindBugs config; consider switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 27s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 59s | the patch passed |
| +1 :green_heart: | compile | 18m 50s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 18m 50s | the patch passed |
| +1 :green_heart: | compile | 16m 46s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 16m 46s | the patch passed |
| -0 :warning: | checkstyle | 2m 44s | root: The patch generated 14 new + 182 unchanged - 1 fixed = 196 total (was 183) |
| +1 :green_heart: | mvnsite | 2m 53s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 15s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 39s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 3m 13s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 5m 37s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 42s | hadoop-common in the patch passed. |
| -1 :x: | unit | 95m 52s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 1m 6s | The patch does not generate ASF License warnings. |
| | | 277m 14s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/8/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux ff8f13716174 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / fefacf2578e |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/8/artifact/out/diff-checkstyle-root.txt |
| unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/8/testReport/ |
| Max. process+thread count | 4403 (vs. ulimit of 5500) |
| mo
[GitHub] [hadoop] dbtsai commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
dbtsai commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675345307

@sunchao thanks! That helps a lot. I rebased on trunk, and modified the script to remove the snappy native build. Let's see how it works.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675345547

(!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2201/5/console in case of problems.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
hadoop-yetus commented on pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#issuecomment-675345218

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 1s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 6 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 21s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 29s | trunk passed |
| +1 :green_heart: | compile | 19m 13s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 16m 44s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 46s | trunk passed |
| +1 :green_heart: | mvnsite | 2m 57s | trunk passed |
| +1 :green_heart: | shadedclient | 20m 11s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 39s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 3m 8s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 3m 9s | Used deprecated FindBugs config; consider switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 19s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 3s | the patch passed |
| +1 :green_heart: | compile | 20m 27s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 20m 27s | the patch passed |
| +1 :green_heart: | compile | 17m 20s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 17m 20s | the patch passed |
| -0 :warning: | checkstyle | 2m 46s | root: The patch generated 14 new + 182 unchanged - 1 fixed = 196 total (was 183) |
| +1 :green_heart: | mvnsite | 2m 55s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 4s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 39s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 3m 13s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 5m 37s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 54s | hadoop-common in the patch passed. |
| -1 :x: | unit | 119m 11s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 1m 3s | The patch does not generate ASF License warnings. |
| | | 302m 10s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
| | hadoop.hdfs.TestStripedFileAppend |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.hdfs.TestMultipleNNPortQOP |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2185 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux 829e31a8d922 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / fefacf2578e |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2185/7/artifact/ou
[jira] [Commented] (HADOOP-17212) Improve and revise the performance description in the Tencent COS website document
[ https://issues.apache.org/jira/browse/HADOOP-17212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17179425#comment-17179425 ] Yang Yu commented on HADOOP-17212:

[~weichiu], [~brahmareddy], could you help review this patch, which revises the Tencent COS object storage document? The current document may cause confusion for some users. Thanks very much.

> Improve and revise the performance description in the Tencent COS website document
> ----------------------------------------------------------------------------------
> Key: HADOOP-17212
> URL: https://issues.apache.org/jira/browse/HADOOP-17212
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/cos
> Affects Versions: 3.3.0
> Reporter: Yang Yu
> Assignee: Yang Yu
> Priority: Major
> Attachments: HADOOP-17212-branch-3.3.001.patch
>
> Improve the description of the maximum single file size limit and revise the performance data in the other issue section due to the performance improvement of COS backend architecture.
[jira] [Updated] (HADOOP-17212) Improve and revise the performance description in the Tencent COS website document
[ https://issues.apache.org/jira/browse/HADOOP-17212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Yu updated HADOOP-17212:

Attachment: HADOOP-17212-branch-3.3.001.patch
Status: Patch Available (was: In Progress)

Considering that the Tencent COS backend architecture has made significant performance improvements, the website document needs to be revised. At the same time, we want to improve the description of the maximum size of a single file, to avoid confusing users about the difference between COS itself and Hadoop-COS.

> Improve and revise the performance description in the Tencent COS website document
> ----------------------------------------------------------------------------------
> Key: HADOOP-17212
> URL: https://issues.apache.org/jira/browse/HADOOP-17212
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/cos
> Affects Versions: 3.3.0
> Reporter: Yang Yu
> Assignee: Yang Yu
> Priority: Major
> Attachments: HADOOP-17212-branch-3.3.001.patch
>
> Improve the description of the maximum single file size limit and revise the performance data in the other issue section due to the performance improvement of COS backend architecture.
[GitHub] [hadoop] ayushtkn commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
ayushtkn commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r471962803

## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java

@@ -0,0 +1,1864 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.BlockStoragePolicySpi;
+import org.apache.hadoop.fs.CacheFlag;
+import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileChecksum;
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsServerDefaults;
+import org.apache.hadoop.fs.FsStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Options;
+import org.apache.hadoop.fs.PartialListing;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.fs.PathHandle;
+import org.apache.hadoop.fs.QuotaUsage;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.XAttrSetFlag;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclStatus;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.fs.viewfs.ViewFileSystem;
+import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
+import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
+import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
+import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult;
+import org.apache.hadoop.hdfs.protocol.EncryptionZone;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.HdfsPathHandle;
+import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
+import org.apache.hadoop.hdfs.protocol.OpenFilesIterator;
+import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
+import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.token.DelegationTokenIssuer;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.Progressable;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+
+import java.net.InetSocketAddress;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * The ViewDistributedFileSystem is an extended class to DistributedFileSystem
+ * with additional mounting functionality. The goal is to have better API
+ * compatibility for HDFS users when using mounting
+ * filesystem(ViewFileSystemOverloadScheme).
+ * The ViewFileSystemOverloadScheme{@link ViewFileSystemOverloadScheme} is a new
+ * filesystem with inherited mou
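The javadoc excerpt above describes the intent of HDFS-15533: keep the DistributedFileSystem API surface for HDFS users while delegating mount resolution to ViewFileSystemOverloadScheme underneath. As a rough, hedged sketch of how a filesystem implementation class like this is typically wired into a Hadoop deployment, the fragment below binds the `hdfs://` scheme to the class via `core-site.xml`. The `fs.hdfs.impl` binding key is the standard Hadoop mechanism for overriding a scheme's FileSystem class; whether this PR intends exactly this configuration is an assumption based on the javadoc, not something stated in the messages above.

```xml
<!-- Hypothetical core-site.xml fragment (not taken from the patch):
     route the hdfs:// scheme through ViewDistributedFileSystem so that
     callers using DistributedFileSystem-specific APIs transparently get
     the mount-table-aware implementation. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.ViewDistributedFileSystem</value>
</property>
```

With such a binding in place, `FileSystem.get(conf)` for an `hdfs://` URI would return the overriding class instead of plain DistributedFileSystem, which is the usual way these drop-in replacements are activated.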