[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675270008

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 26m 11s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 55s | trunk passed |
| +1 :green_heart: | compile | 19m 32s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 16m 55s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 59s | trunk passed |
| +1 :green_heart: | mvnsite | 20m 47s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 24s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 6m 49s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 7m 36s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 0m 29s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 23s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
| +0 :ok: | findbugs | 0m 29s | branch/hadoop-project-dist no findbugs output file (findbugsXml.xml) |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 44s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 21m 32s | the patch passed |
| -1 :x: | compile | 1m 17s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | cc | 1m 17s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | golang | 1m 17s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | javac | 1m 17s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. |
| -1 :x: | compile | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -1 :x: | cc | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -1 :x: | golang | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -1 :x: | javac | 1m 12s | root in the patch failed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| -0 :warning: | checkstyle | 2m 37s | root: The patch generated 11 new + 140 unchanged - 3 fixed = 151 total (was 143) |
| +1 :green_heart: | mvnsite | 17m 29s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 1s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 19s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 5s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 14m 10s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 6m 23s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 7m 9s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | findbugs | 0m 23s | hadoop-project has no data from findbugs |
| +0 :ok: | findbugs | 0m 23s | hadoop-project-dist has no data from findbugs |
||| _ Other Tests _ |
| -1 :x: | unit | 7m 3s | root in the patch failed. |
| +1 :green_heart: | asflicense | 1m 9s | The patch does not generate ASF License warnings. |
| | | 302m 17s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2201/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2201 |
| Optional Tests | dupname asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle cc golang |
| uname | Linux d59efa32fb0d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool |
[jira] [Commented] (HADOOP-17209) ErasureCode native library memory leak
[ https://issues.apache.org/jira/browse/HADOOP-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179358#comment-17179358 ] Fei Hui commented on HADOOP-17209: -- [~seanlook] Could you please change the caption to "Erasure coding: Native library memory leak"? I see other EC issues did that. > ErasureCode native library memory leak > -- > > Key: HADOOP-17209 > URL: https://issues.apache.org/jira/browse/HADOOP-17209 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 3.3.0, 3.2.1, 3.1.3 >Reporter: Sean Chow >Assignee: Sean Chow >Priority: Major > Attachments: HADOOP-17209.001.patch, > datanode.202137.detail_diff.5.txt, image-2020-08-15-18-26-44-744.png, > image-2020-08-17-11-26-04-276.png > > > We use both {{apache-hadoop-3.1.3}} and {{CDH-6.1.1-1.cdh6.1.1.p0.875250}} > HDFS in production, and both of them have memory usage increasing beyond the {{-Xmx}} > value. > !image-2020-08-15-18-26-44-744.png! > > We use the EC strategy to save storage costs. > These are the JVM options: > {code:java} > -Dproc_datanode -Dhdfs.audit.logger=INFO,RFAAUDIT > -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true > -Xms8589934592 -Xmx8589934592 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC > -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled > -XX:+HeapDumpOnOutOfMemoryError ...{code} > The max JVM heap size is 8GB, but we can see the datanode RSS memory is 48g. > All the other datanodes in this HDFS cluster have the same issue. > {code:java} > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND > 226044 hdfs 20 0 50.6g 48g 4780 S 90.5 77.0 14728:27 > /usr/java/jdk1.8.0_162/bin/java -Dproc_datanode{code} > > This excessive memory usage makes the machine unresponsive (if swap is enabled), > or the oom-killer is triggered. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
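For context on where the native memory in the issue above comes from: Hadoop's raw erasure coders are obtained through CodecUtil, and the native (ISA-L backed) implementations hold buffers outside the Java heap that are only freed when release() is called. The sketch below is a minimal, hypothetical illustration of that create/use/release pattern, not the HADOOP-17209 patch itself; exact signatures and constant names may differ between Hadoop versions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.ErasureCodeConstants;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class EcCoderLifecycle {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // RS(6,3): 6 data units, 3 parity units.
    ErasureCoderOptions options = new ErasureCoderOptions(6, 3);
    RawErasureEncoder encoder = CodecUtil.createRawEncoder(
        conf, ErasureCodeConstants.RS_CODEC_NAME, options);
    try {
      // ... encode stripes with encoder.encode(inputs, outputs) ...
    } finally {
      // Native coders keep state off the Java heap; if release() is never
      // called, DataNode RSS can keep growing even though -Xmx is respected.
      encoder.release();
    }
  }
}
```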
[jira] [Updated] (HADOOP-17202) Fix findbugs warnings in hadoop-tools on branch-2.10
[ https://issues.apache.org/jira/browse/HADOOP-17202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-17202: -- Fix Version/s: 2.10.1 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) > Fix findbugs warnings in hadoop-tools on branch-2.10 > > > Key: HADOOP-17202 > URL: https://issues.apache.org/jira/browse/HADOOP-17202 > Project: Hadoop Common > Issue Type: Bug >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Fix For: 2.10.1 > > > {noformat} > M D UC_USELESS_OBJECT UC: Useless object stored in variable > keysToUpdateAsFolder of method > org.apache.hadoop.fs.azure.NativeAzureFileSystem.mkdirs(Path, FsPermission, > boolean) At NativeAzureFileSystem.java:[line 3013] > M D DLS_DEAD_LOCAL_STORE DLS: Dead store to op in > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.access(Path, FsAction) > At AzureBlobFileSystemStore.java:[line 901] > M B CO_COMPARETO_INCORRECT_FLOATING Co: > org.apache.hadoop.mapred.gridmix.InputStriper$1.compare(Map$Entry, Map$Entry) > incorrectly handles double value At InputStriper.java:[line 136] > M V MS_MUTABLE_COLLECTION_PKGPROTECT MS: > org.apache.hadoop.mapred.gridmix.emulators.resourceusage.TotalHeapUsageEmulatorPlugin$DefaultHeapUsageEmulator.heapSpace > is a mutable collection which should be package protected At > TotalHeapUsageEmulatorPlugin.java:[line 132] > M D RV_RETURN_VALUE_IGNORED_NO_SIDE_EFFECT RV: Return value of > org.codehaus.jackson.map.ObjectMapper.getJsonFactory() ignored, but method > has no side effect At JsonObjectMapperWriter.java:[line 59] > H D RV_RETURN_VALUE_IGNORED_NO_SIDE_EFFECT RV: Return value of new > org.apache.hadoop.tools.rumen.datatypes.DefaultDataType(String) ignored, but > method has no side effect At MapReduceJobPropertiesParser.java:[line 212] > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] iwasakims merged pull request #2214: HADOOP-17202. Fix findbugs warnings in hadoop-tools on branch-2.10.
iwasakims merged pull request #2214: URL: https://github.com/apache/hadoop/pull/2214 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] iwasakims commented on pull request #2214: HADOOP-17202. Fix findbugs warnings in hadoop-tools on branch-2.10.
iwasakims commented on pull request #2214: URL: https://github.com/apache/hadoop/pull/2214#issuecomment-675249496 Thanks, @aajisaka. I merged this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao edited a comment on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
sunchao edited a comment on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675230696 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
hadoop-yetus commented on pull request #2179: URL: https://github.com/apache/hadoop/pull/2179#issuecomment-675237006

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 33s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 34s | trunk passed |
| +1 :green_heart: | compile | 0m 34s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 32s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 21s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 37s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 1s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 29s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 24s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 1m 0s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 58s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | the patch passed |
| +1 :green_heart: | compile | 0m 29s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 25s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 16s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 29s | the patch passed |
| -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | shadedclient | 14m 22s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 1m 2s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 18s | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. |
| | | 72m 4s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2179 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint |
| uname | Linux 63a10e73583a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / b367942fe49 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| whitespace | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/5/artifact/out/whitespace-eol.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/5/testReport/ |
| Max. process+thread count | 402 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/5/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use
[jira] [Commented] (HADOOP-17205) Move personality file from Yetus to Hadoop repository
[ https://issues.apache.org/jira/browse/HADOOP-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179345#comment-17179345 ] Akira Ajisaka commented on HADOOP-17205: Also updated the configs of the precommit jobs (e.g. PreCommit-HADOOP-Build) to use the script. > Move personality file from Yetus to Hadoop repository > -- > > Key: HADOOP-17205 > URL: https://issues.apache.org/jira/browse/HADOOP-17205 > Project: Hadoop Common > Issue Type: Test > Components: test, yetus >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Fix For: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5 > > > Currently for CI build and testing we maintain personality scripts (i.e., > [here|https://github.com/apache/yetus/blob/master/precommit/src/main/shell/personality/hadoop.sh]) > in both Apache Yetus and Apache Hadoop. This poses problem when one needs to > change both places, for example HADOOP-17125. > This proposes to move the personality file into the Hadoop repo itself, so > that we can manage them in a single place. The downside for this is we may > need to duplicate the scripts in every branch. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
sunchao commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-675230696 @dbtsai FYI #2226 has been merged - you can try build again This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16966) ABFS: Upgrade Store REST API Version to 2019-12-12
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sneha Vijayarajan updated HADOOP-16966: --- Resolution: Fixed Status: Resolved (was: Patch Available) > ABFS: Upgrade Store REST API Version to 2019-12-12 > -- > > Key: HADOOP-16966 > URL: https://issues.apache.org/jira/browse/HADOOP-16966 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Ishani >Assignee: Sneha Vijayarajan >Priority: Major > Labels: abfsactive > > Store REST API version on the backend clusters has been upgraded to > 2019-12-12. This Jira will align the Driver requests to reflect this latest > API version. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471892967 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java ## @@ -1430,4 +1432,49 @@ public void testGetContentSummaryWithFileInLocalFS() throws Exception { summaryAfter.getLength()); } } + + @Test + public void testMountPointCache() throws Exception { +conf.setInt(Constants.CONFIG_VIEWFS_PATH_RESOLUTION_CACHE_CAPACITY, 1); +conf.setBoolean("fs.viewfs.impl.disable.cache", true); Review comment: One more thing to clarify: I guess this config is for the per-scheme-level cache. Regex mount points are OK with it. The mount table does not work well with the inner cache inside ViewFileSystem.java. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-17212) Improve and revise the performance description in the Tencent COS website document
[ https://issues.apache.org/jira/browse/HADOOP-17212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-17212 started by Yang Yu. > Improve and revise the performance description in the Tencent COS website > document > -- > > Key: HADOOP-17212 > URL: https://issues.apache.org/jira/browse/HADOOP-17212 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/cos >Affects Versions: 3.3.0 >Reporter: Yang Yu >Assignee: Yang Yu >Priority: Major > > Improve the description of the maximum single file size limit and revise the > performance data in the other issue section due to the performance > improvement of COS backend architecture. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17212) Improve and revise the performance description in the Tencent COS website document
Yang Yu created HADOOP-17212: Summary: Improve and revise the performance description in the Tencent COS website document Key: HADOOP-17212 URL: https://issues.apache.org/jira/browse/HADOOP-17212 Project: Hadoop Common Issue Type: Sub-task Components: fs/cos Affects Versions: 3.3.0 Reporter: Yang Yu Assignee: Yang Yu Improve the description of the maximum single file size limit and revise the performance data in the other issue section due to the performance improvement of COS backend architecture. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471892058 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkRegex.java ## @@ -0,0 +1,320 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.fs.viewfs; + +import java.io.File; +import java.io.IOException; +import java.net.URI; +import java.util.Arrays; +import java.util.List; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FileSystemTestHelper; +import org.apache.hadoop.fs.FsConstants; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hdfs.DFSConfigKeys; +import org.apache.hadoop.hdfs.MiniDFSCluster; +import org.apache.hadoop.hdfs.MiniDFSNNTopology; +import org.apache.hadoop.test.GenericTestUtils; +import org.junit.AfterClass; +import org.junit.Assert; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE; +import static org.apache.hadoop.fs.viewfs.RegexMountPoint.INTERCEPTOR_INTERNAL_SEP; + +/** + * Test linkRegex node type for view file system. 
+ */ +public class TestViewFileSystemLinkRegex extends ViewFileSystemBaseTest { + public static final Logger LOGGER = + LoggerFactory.getLogger(TestViewFileSystemLinkRegex.class); + + private static FileSystem fsDefault; + private static MiniDFSCluster cluster; + private static Configuration clusterConfig; + private static final int NAME_SPACES_COUNT = 3; + private static final int DATA_NODES_COUNT = 3; + private static final int FS_INDEX_DEFAULT = 0; + private static final FileSystem[] FS_HDFS = new FileSystem[NAME_SPACES_COUNT]; + private static final String CLUSTER_NAME = + "TestViewFileSystemLinkRegexCluster"; + private static final File TEST_DIR = GenericTestUtils + .getTestDir(TestViewFileSystemLinkRegex.class.getSimpleName()); + private static final String TEST_BASE_PATH = + "/tmp/TestViewFileSystemLinkRegex"; + + @Override + protected FileSystemTestHelper createFileSystemHelper() { +return new FileSystemTestHelper(TEST_BASE_PATH); + } + + @BeforeClass + public static void clusterSetupAtBeginning() throws IOException { +SupportsBlocks = true; +clusterConfig = ViewFileSystemTestSetup.createConfig(); +clusterConfig.setBoolean( +DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY, +true); +cluster = new MiniDFSCluster.Builder(clusterConfig).nnTopology( +MiniDFSNNTopology.simpleFederatedTopology(NAME_SPACES_COUNT)) +.numDataNodes(DATA_NODES_COUNT).build(); +cluster.waitClusterUp(); + +for (int i = 0; i < NAME_SPACES_COUNT; i++) { + FS_HDFS[i] = cluster.getFileSystem(i); +} +fsDefault = FS_HDFS[FS_INDEX_DEFAULT]; + } + + @AfterClass + public static void clusterShutdownAtEnd() throws Exception { +if (cluster != null) { + cluster.shutdown(); +} + } + + @Override + @Before + public void setUp() throws Exception { +fsTarget = fsDefault; +super.setUp(); + } + + /** + * Override this so that we don't set the targetTestRoot to any path under the + * root of the FS, and so that we don't try to delete the test dir, but rather + * only its contents. + */ + @Override + void initializeTargetTestRoot() throws IOException { +targetTestRoot = fsDefault.makeQualified(new Path("/")); +for (FileStatus status : fsDefault.listStatus(targetTestRoot)) { + fsDefault.delete(status.getPath(), true); +} + } + + @Override + void setupMountPoints() { +super.setupMountPoints(); + } + + @Override + int getExpectedDelegationTokenCount() { +return 1; // all point to the same fs so 1 unique token + } + + @Override + int getExpectedDelegationTokenCountWithCredentials() { +return 1; + } + + public String buildReplaceInterceptorSettingString(String srcRegex, + String
[jira] [Updated] (HADOOP-17205) Move personality file from Yetus to Hadoop repository
[ https://issues.apache.org/jira/browse/HADOOP-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17205: --- Fix Version/s: 3.1.5 3.4.0 3.3.1 2.10.1 3.2.2 2.9.3 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Merged the PR into all the active branches. Thank you [~csun] for your contribution! > Move personality file from Yetus to Hadoop repository > -- > > Key: HADOOP-17205 > URL: https://issues.apache.org/jira/browse/HADOOP-17205 > Project: Hadoop Common > Issue Type: Test > Components: test, yetus >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > Fix For: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5 > > > Currently for CI build and testing we maintain personality scripts (i.e., > [here|https://github.com/apache/yetus/blob/master/precommit/src/main/shell/personality/hadoop.sh]) > in both Apache Yetus and Apache Hadoop. This poses problem when one needs to > change both places, for example HADOOP-17125. > This proposes to move the personality file into the Hadoop repo itself, so > that we can manage them in a single place. The downside for this is we may > need to duplicate the scripts in every branch. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka merged pull request #2226: HADOOP-17205. Move personality file from Yetus to Hadoop repository
aajisaka merged pull request #2226: URL: https://github.com/apache/hadoop/pull/2226 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2226: HADOOP-17205. Move personality file from Yetus to Hadoop repository
hadoop-yetus commented on pull request #2226: URL: https://github.com/apache/hadoop/pull/2226#issuecomment-675219867

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 31m 10s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 44m 13s | trunk passed |
| +1 :green_heart: | mvnsite | 20m 13s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 8s | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 18m 58s | the patch passed |
| +1 :green_heart: | mvnsite | 17m 6s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 1s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 18s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 7s | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 20m 0s | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 3s | The patch does not generate ASF License warnings. |
| | | 183m 6s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2226 |
| Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit |
| uname | Linux 7cca94bd6377 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b367942fe49 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/3/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/3/console |
| versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17205) Move personality file from Yetus to Hadoop repository
[ https://issues.apache.org/jira/browse/HADOOP-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17205: --- Target Version/s: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5 Status: Patch Available (was: Open) > Move personality file from Yetus to Hadoop repository > -- > > Key: HADOOP-17205 > URL: https://issues.apache.org/jira/browse/HADOOP-17205 > Project: Hadoop Common > Issue Type: Test > Components: test, yetus >Reporter: Chao Sun >Assignee: Chao Sun >Priority: Major > > Currently for CI build and testing we maintain personality scripts (i.e., > [here|https://github.com/apache/yetus/blob/master/precommit/src/main/shell/personality/hadoop.sh]) > in both Apache Yetus and Apache Hadoop. This poses problem when one needs to > change both places, for example HADOOP-17125. > This proposes to move the personality file into the Hadoop repo itself, so > that we can manage them in a single place. The downside for this is we may > need to duplicate the scripts in every branch. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471866865 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkRegex.java ## @@ -0,0 +1,320 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.fs.viewfs; + +import java.io.File; +import java.io.IOException; +import java.net.URI; +import java.util.Arrays; +import java.util.List; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FileSystemTestHelper; +import org.apache.hadoop.fs.FsConstants; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hdfs.DFSConfigKeys; +import org.apache.hadoop.hdfs.MiniDFSCluster; +import org.apache.hadoop.hdfs.MiniDFSNNTopology; +import org.apache.hadoop.test.GenericTestUtils; +import org.junit.AfterClass; +import org.junit.Assert; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.hadoop.fs.viewfs.Constants.CONFIG_VIEWFS_ENABLE_INNER_CACHE; +import static org.apache.hadoop.fs.viewfs.RegexMountPoint.INTERCEPTOR_INTERNAL_SEP; + +/** + * Test linkRegex node type for view file system. 
+ */ +public class TestViewFileSystemLinkRegex extends ViewFileSystemBaseTest { + public static final Logger LOGGER = + LoggerFactory.getLogger(TestViewFileSystemLinkRegex.class); + + private static FileSystem fsDefault; + private static MiniDFSCluster cluster; + private static Configuration clusterConfig; + private static final int NAME_SPACES_COUNT = 3; + private static final int DATA_NODES_COUNT = 3; + private static final int FS_INDEX_DEFAULT = 0; + private static final FileSystem[] FS_HDFS = new FileSystem[NAME_SPACES_COUNT]; + private static final String CLUSTER_NAME = + "TestViewFileSystemLinkRegexCluster"; + private static final File TEST_DIR = GenericTestUtils + .getTestDir(TestViewFileSystemLinkRegex.class.getSimpleName()); + private static final String TEST_BASE_PATH = + "/tmp/TestViewFileSystemLinkRegex"; + + @Override + protected FileSystemTestHelper createFileSystemHelper() { +return new FileSystemTestHelper(TEST_BASE_PATH); + } + + @BeforeClass + public static void clusterSetupAtBeginning() throws IOException { +SupportsBlocks = true; +clusterConfig = ViewFileSystemTestSetup.createConfig(); +clusterConfig.setBoolean( +DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY, +true); +cluster = new MiniDFSCluster.Builder(clusterConfig).nnTopology( +MiniDFSNNTopology.simpleFederatedTopology(NAME_SPACES_COUNT)) +.numDataNodes(DATA_NODES_COUNT).build(); +cluster.waitClusterUp(); + +for (int i = 0; i < NAME_SPACES_COUNT; i++) { + FS_HDFS[i] = cluster.getFileSystem(i); +} +fsDefault = FS_HDFS[FS_INDEX_DEFAULT]; + } + + @AfterClass + public static void clusterShutdownAtEnd() throws Exception { +if (cluster != null) { + cluster.shutdown(); +} + } + + @Override + @Before + public void setUp() throws Exception { +fsTarget = fsDefault; +super.setUp(); + } + + /** + * Override this so that we don't set the targetTestRoot to any path under the + * root of the FS, and so that we don't try to delete the test dir, but rather + * only its contents. + */ + @Override + void initializeTargetTestRoot() throws IOException { +targetTestRoot = fsDefault.makeQualified(new Path("/")); +for (FileStatus status : fsDefault.listStatus(targetTestRoot)) { + fsDefault.delete(status.getPath(), true); +} + } + + @Override + void setupMountPoints() { +super.setupMountPoints(); + } + + @Override + int getExpectedDelegationTokenCount() { +return 1; // all point to the same fs so 1 unique token + } + + @Override + int getExpectedDelegationTokenCountWithCredentials() { +return 1; + } + + public String buildReplaceInterceptorSettingString(String srcRegex, + String
[GitHub] [hadoop] chimney-lee closed pull request #1618: MAPREDUCE-7240.Exception 'Invalid event: TA_TOO_MANY_FETCH_FAILURE at…
chimney-lee closed pull request #1618: URL: https://github.com/apache/hadoop/pull/1618 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jimmy-zuber-amzn commented on a change in pull request #2069: HADOOP-16830. IOStatistics API.
jimmy-zuber-amzn commented on a change in pull request #2069: URL: https://github.com/apache/hadoop/pull/2069#discussion_r471833875 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSInputStream.java ## @@ -134,4 +137,23 @@ public void readFully(long position, byte[] buffer) throws IOException { readFully(position, buffer, 0, buffer.length); } + + /** + * toString method returns the superclass toString, but if the subclass + * implements {@link IOStatisticsSource} then those statistics are + * extracted and included in the output. + * That is: statistics of subclasses are automatically reported. + * @return a string value. + */ + @Override + public String toString() { +final StringBuilder sb = new StringBuilder(super.toString()); +sb.append('{'); +if (this instanceof IOStatisticsSource) { + sb.append(IOStatisticsLogging.ioStatisticsSourceToString( + (IOStatisticsSource) this)); +} +sb.append('}'); +return sb.toString(); + } Review comment: It looks like this IOStatistics API primarily displays information through toString() messages in log statements. Is this the best way to integrate with Hadoop - or is there value to adding an explicit (possibly type safe) interface for objects to display their IOStatistics information? Also - does it make sense here to move the calls to `sb#append` inside of the if statement? That way, non instances will not have their string representation changed. ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/statistics/impl/EvaluatingStatisticsMap.java ## @@ -0,0 +1,191 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.statistics.impl; + +import java.io.Serializable; +import java.util.Collection; +import java.util.Map; +import java.util.Set; +import java.util.TreeMap; +import java.util.function.Function; +import java.util.stream.Collectors; + +/** + * A map of functions which can be invoked to dynamically + * create the value of an entry. + * @param type of entry value. + */ +final class EvaluatingStatisticsMap implements +Map { + + /** + * Functions to invoke when evaluating keys. + */ + private final Map> evaluators + = new TreeMap<>(); + + /** + * Function to use when copying map values. + */ + private final Function copyFn; + + /** + * Name for use in getter/error messages. + */ + private final String name; + + EvaluatingStatisticsMap(final String name) { +this(name, IOStatisticsBinding::passthroughFn); + } + + EvaluatingStatisticsMap(final String name, + final Function copyFn) { +this.name = name; +this.copyFn = copyFn; + } + + /** + * add a mapping of a key to a function. 
+ * @param key the key + * @param eval the evaluator + */ + void addFunction(String key, Function eval) { +evaluators.put(key, eval); + } Review comment: Since `evaluators` is a `TreeMap`, concurrent calls to this method may result in incorrect results due to `TreeMap` not being threadsafe. Additionally, calls to `#keySet`, `#entries`, and `#snapshot` may see `ConcurrentModificationException`s. Given that there's no ordering requirement in the representation of this map, could a `ConcurrentHashMap` be used instead? ## File path: hadoop-common-project/hadoop-common/src/site/markdown/filesystem/iostatistics.md ## @@ -0,0 +1,432 @@ + + +# Statistic collection with the IOStatistics API + +```java +@InterfaceAudience.Public +@InterfaceStability.Unstable +``` + +The `IOStatistics` API is intended to provide statistics on individual IO +classes -such as input and output streams, *in a standard way which +applications can query* + +Many filesystem-related classes have implemented statistics gathering +and provided private/unstable ways to query this, but as they were +not common across implementations it was unsafe for applications +to reference these values. Example: `S3AInputStream` and its statistics +API. This is used in internal tests, but cannot be used downstream in +applications such as Apache Hive or Apache HBase. + +The
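A minimal sketch of the direction suggested in the review comment above, backing the evaluator map with a ConcurrentHashMap instead of a TreeMap so that addFunction and snapshot-style reads are safe under concurrent access. The class name ConcurrentEvaluatingMap and its methods are illustrative only and loosely mirror the quoted patch; this is not the actual HADOOP-16830 change.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Illustrative only: thread-safe variant of the evaluator map. */
final class ConcurrentEvaluatingMap<E> {

  // ConcurrentHashMap tolerates concurrent put/iterate without
  // ConcurrentModificationException; sorted ordering is given up.
  private final Map<String, Function<String, E>> evaluators =
      new ConcurrentHashMap<>();

  /** Register a function that produces the value for a key on demand. */
  void addFunction(String key, Function<String, E> eval) {
    evaluators.put(key, eval);
  }

  /** Evaluate one entry, or return null if the key is unknown. */
  E eval(String key) {
    Function<String, E> fn = evaluators.get(key);
    return fn == null ? null : fn.apply(key);
  }

  /** Materialize all entries at call time. */
  Map<String, E> snapshot() {
    Map<String, E> copy = new ConcurrentHashMap<>();
    evaluators.forEach((k, fn) -> copy.put(k, fn.apply(k)));
    return copy;
  }
}
```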
[GitHub] [hadoop] hadoop-yetus commented on pull request #2226: HADOOP-17205. Move personality file from Yetus to Hadoop repository
hadoop-yetus commented on pull request #2226: URL: https://github.com/apache/hadoop/pull/2226#issuecomment-675172510 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/3/console in case of problems. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sunchao commented on a change in pull request #2226: HADOOP-17205. Move personality file from Yetus to Hadoop repository
sunchao commented on a change in pull request #2226: URL: https://github.com/apache/hadoop/pull/2226#discussion_r471833557 ## File path: Jenkinsfile ## @@ -96,8 +96,7 @@ pipeline { YETUS_ARGS+=("--basedir=${WORKSPACE}/${SOURCEDIR}") # our project defaults come from a personality file -# which will get loaded automatically by setting the project name -YETUS_ARGS+=("--project=hadoop") Review comment: Sure. Will add back. Thanks This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471831712 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md ## @@ -366,6 +366,82 @@ Don't want to change scheme or difficult to copy mount-table configurations to a Please refer to the [View File System Overload Scheme Guide](./ViewFsOverloadScheme.html) +Regex Pattern Based Mount Points + + +The view file system mount points were a Key-Value based mapping system. It is not friendly for user cases which mapping config could be abstracted to rules. E.g. Users want to provide a GCS bucket per user and there might be thousands of users in total. The old key-value based approach won't work well for several reasons: + +1. The mount table is used by FileSystem clients. There's a cost to spread the config to all clients and we should avoid it if possible. The [View File System Overload Scheme Guide](./ViewFsOverloadScheme.html) could help the distribution by central mount table management. But the mount table still have to be updated on every change. The change could be greatly avoided if provide a rule-based mount table.. + +2. The client have to understand all the KVs in the mount table. This is not ideal when the mountable grows to thousands of items. E.g. thousands of file systems might be initialized even users only need one. And the config itself will become bloated at scale. + +### Understand the Difference + +In the key-value based mount table, view file system treats every mount point as a partition. There's several file system APIs which will lead to operation on all partitions. E.g. there's an HDFS cluster with multiple mount. Users want to run “hadoop fs -put file viewfs://hdfs.namenode.apache.org/tmp/” cmd to copy data from local disk to our HDFS cluster. The cmd will trigger ViewFileSystem to call setVerifyChecksum() method which will initialize the file system for every mount point. +For a regex-base rule mount table entry, we couldn't know what's corresponding path until parsing. So the regex based mount table entry will be ignored on such cases and the file system will be created upon accessing. The inner cache of ViewFs is also not available for regex-base mount points now as it assumes target file system doesn't change after viewfs initialization. Please disable it if you want to use regex-base mount table. We also need to change the rename strategy to SAME_FILESYSTEM_ACROSS_MOUNTPOINT for the same reason. +```xml + +fs.viewfs.enable.inner.cache +false Review comment: Good call. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
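As the ViewFs.md excerpt quoted above notes, regex-based mount points are expected to run with the ViewFs inner cache disabled and with the SAME_FILESYSTEM_ACROSS_MOUNTPOINT rename strategy. The snippet below is a small, hypothetical sketch of setting those two options programmatically; the inner-cache key is the one quoted in the excerpt, the rename-strategy key name is an assumption based on the existing ViewFs constants, and the linkRegex mount entries themselves are omitted because their exact key format is defined by the patch under review.

```java
import org.apache.hadoop.conf.Configuration;

public class RegexMountPointSettings {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    // Regex mount points resolve their targets lazily, so the inner cache
    // populated at ViewFileSystem initialization has to be switched off.
    conf.setBoolean("fs.viewfs.enable.inner.cache", false);
    // Renames may cross mount points that resolve to the same underlying
    // file system, hence the relaxed strategy mentioned in the docs.
    conf.set("fs.viewfs.rename.strategy", "SAME_FILESYSTEM_ACROSS_MOUNTPOINT");
    return conf;
  }
}
```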
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471830882 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ViewFs.md ## @@ -366,6 +366,82 @@ Don't want to change scheme or difficult to copy mount-table configurations to a Please refer to the [View File System Overload Scheme Guide](./ViewFsOverloadScheme.html) +Regex Pattern Based Mount Points + + +The view file system mount points were a Key-Value based mapping system. It is not friendly for user cases which mapping config could be abstracted to rules. E.g. Users want to provide a GCS bucket per user and there might be thousands of users in total. The old key-value based approach won't work well for several reasons: + +1. The mount table is used by FileSystem clients. There's a cost to spread the config to all clients and we should avoid it if possible. The [View File System Overload Scheme Guide](./ViewFsOverloadScheme.html) could help the distribution by central mount table management. But the mount table still have to be updated on every change. The change could be greatly avoided if provide a rule-based mount table.. + +2. The client have to understand all the KVs in the mount table. This is not ideal when the mountable grows to thousands of items. E.g. thousands of file systems might be initialized even users only need one. And the config itself will become bloated at scale. Review comment: I didn't realize that there is a jira. This is great. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471830630 ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java ## @@ -1430,4 +1432,49 @@ public void testGetContentSummaryWithFileInLocalFS() throws Exception { summaryAfter.getLength()); } } + + @Test + public void testMountPointCache() throws Exception { +conf.setInt(Constants.CONFIG_VIEWFS_PATH_RESOLUTION_CACHE_CAPACITY, 1); +conf.setBoolean("fs.viewfs.impl.disable.cache", true); Review comment: @umamaheswararao This is a good point. Let me add some preconditions check. BTW, now the inner cache assumes every filesystem is created while InodeTree is constructed and never changed. Do you think it's reasonable to change it to a concurrent hash map? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471825828 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java ## @@ -166,6 +166,41 @@ public static void addLinkNfly(final Configuration conf, final String src, addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets); } + + /** + * Add a LinkRegex to the config for the specified mount table. Review comment: Make sense, thanks. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
hadoop-yetus commented on pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#issuecomment-675146633

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 18m 39s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 29m 57s | trunk passed |
| +1 :green_heart: | compile | 21m 37s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 17m 50s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 51s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 26s | trunk passed |
| +1 :green_heart: | shadedclient | 17m 8s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 32s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 1m 28s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 2m 21s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 19s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 53s | the patch passed |
| +1 :green_heart: | compile | 20m 58s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 20m 58s | the patch passed |
| +1 :green_heart: | compile | 17m 44s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 17m 44s | the patch passed |
| -0 :warning: | checkstyle | 0m 50s | hadoop-common-project/hadoop-common: The patch generated 1 new + 85 unchanged - 0 fixed = 86 total (was 85) |
| +1 :green_heart: | mvnsite | 1m 23s | the patch passed |
| -1 :x: | whitespace | 0m 0s | The patch 1 line(s) with tabs. |
| +1 :green_heart: | shadedclient | 15m 30s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 31s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 1m 27s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 2m 23s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 43s | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 55s | The patch does not generate ASF License warnings. |
| | | 185m 57s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2197 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cf74f1a7feac 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / b367942fe49 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/3/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| whitespace | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/3/artifact/out/whitespace-tabs.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/3/testReport/ |
| Max. process+thread count | 2881 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/3/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated. This is an automated message from
[jira] [Updated] (HADOOP-17169) Remove whitelist/blacklist terminology from Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-17169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HADOOP-17169: - Attachment: HADOOP-17169.001.patch > Remove whitelist/blacklist terminology from Hadoop Common > - > > Key: HADOOP-17169 > URL: https://issues.apache.org/jira/browse/HADOOP-17169 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Eric Badger >Assignee: Eric Badger >Priority: Major > Attachments: HADOOP-17169.001.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17169) Remove whitelist/blacklist terminology from Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-17169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17179254#comment-17179254 ] Eric Badger commented on HADOOP-17169: -- In patch 001, I changed all instances of whitelist/blacklist to allowlist/denylist in hadoop-common. The obvious downside of this is that the config keys are now incompatible changes. We could potentially keep backwards compatibility for these old config keys, but I feel like that'll just lead to us kicking this down the road and never actually removing the old keys. Almost all of the changes are in hadoop-common, but there are a few instances of changes in hadoop-hdfs-project due to changes that were made in common. Major changes: ||Old Config Key||New Config Key|| |hadoop.http.authentication.kerberos.endpoint.whitelist|hadoop.http.authentication.kerberos.endpoint.allowlist| |hadoop.security.sasl.fixedwhitelist.file|hadoop.security.sasl.fixedallowlist.file| |hadoop.security.sasl.variablewhitelist.enable|hadoop.security.sasl.variableallowlist.enable| |hadoop.security.sasl.variablewhitelist.file|hadoop.security.sasl.variableallowlist.file| |hadoop.security.sasl.variablewhitelist.cache.secs|hadoop.security.sasl.variableallowlist.cache.secs| |hadoop.rpc.protection.non-whitelist|hadoop.rpc.protection.non-allowlist| |hadoop.kms.blacklist.*|hadoop.kms.denylist.*| |whitelist.key.acl.*|allowlist.key.acl.*| ||Old File Name||New File name|| |WhitelistBasedResolver.java|AllowlistBasedResolver.java| |CombinedIPWhiteList.java|CombinedIPAllowList.java| |TestWhitelistBasedResolver.java|TestAllowlistBasedResolver.java| > Remove whitelist/blacklist terminology from Hadoop Common > - > > Key: HADOOP-17169 > URL: https://issues.apache.org/jira/browse/HADOOP-17169 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Eric Badger >Assignee: Eric Badger >Priority: Major > Attachments: HADOOP-17169.001.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
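One low-cost way to keep the old keys working while still moving to the new names would be Hadoop's existing key-deprecation machinery. A minimal sketch (assuming the new key names land exactly as in the table above, and covering only the non-wildcard keys) could look like:

```java
import org.apache.hadoop.conf.Configuration;

public final class AllowDenyListCompat {
  private AllowDenyListCompat() {
  }

  /**
   * Register the retired key names as deprecated aliases of the new ones,
   * so existing configs keep working but log the standard deprecation
   * warning. Key names are copied from the table above and remain an
   * assumption until the patch is actually committed.
   */
  public static void registerDeprecations() {
    Configuration.addDeprecation(
        "hadoop.security.sasl.fixedwhitelist.file",
        "hadoop.security.sasl.fixedallowlist.file");
    Configuration.addDeprecation(
        "hadoop.security.sasl.variablewhitelist.enable",
        "hadoop.security.sasl.variableallowlist.enable");
    Configuration.addDeprecation(
        "hadoop.security.sasl.variablewhitelist.file",
        "hadoop.security.sasl.variableallowlist.file");
    Configuration.addDeprecation(
        "hadoop.security.sasl.variablewhitelist.cache.secs",
        "hadoop.security.sasl.variableallowlist.cache.secs");
    Configuration.addDeprecation(
        "hadoop.rpc.protection.non-whitelist",
        "hadoop.rpc.protection.non-allowlist");
  }
}
```

Whether such a compatibility bridge is worth carrying is exactly the trade-off raised in the comment above.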
[GitHub] [hadoop] hadoop-yetus commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
hadoop-yetus commented on pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#issuecomment-675142423 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 26m 11s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 11s | trunk passed | | +1 :green_heart: | compile | 20m 37s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 18m 24s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 52s | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | trunk passed | | +1 :green_heart: | shadedclient | 17m 4s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 36s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 32s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 2m 13s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 11s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 53s | the patch passed | | +1 :green_heart: | compile | 20m 30s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 20m 30s | the patch passed | | +1 :green_heart: | compile | 18m 24s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 18m 24s | the patch passed | | -0 :warning: | checkstyle | 0m 47s | hadoop-common-project/hadoop-common: The patch generated 1 new + 85 unchanged - 0 fixed = 86 total (was 85) | | +1 :green_heart: | mvnsite | 1m 22s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch 1 line(s) with tabs. | | +1 :green_heart: | shadedclient | 13m 59s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 30s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 29s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 2m 19s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 52s | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 55s | The patch does not generate ASF License warnings. 
| | | | 191m 58s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2197 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d06ad23d11e8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / b367942fe49 | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | whitespace | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/2/artifact/out/whitespace-tabs.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/2/testReport/ | | Max. process+thread count | 1378 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2197/2/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from
[GitHub] [hadoop] killerwhile closed pull request #1728: HADOOP-16728. Avoid removing all the channels from connectionInfo when borrowing from the pool
killerwhile closed pull request #1728: URL: https://github.com/apache/hadoop/pull/1728 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] killerwhile commented on pull request #1728: HADOOP-16728. Avoid removing all the channels from connectionInfo when borrowing from the pool
killerwhile commented on pull request #1728: URL: https://github.com/apache/hadoop/pull/1728#issuecomment-675091155 The problem seems to be addressed in https://issues.apache.org/jira/browse/HADOOP-15358, which has been released in Hadoop 3.3.0. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2231: HADOOP-17210 Backport HADOOP-15691 PathCapabilities
hadoop-yetus commented on pull request #2231: URL: https://github.com/apache/hadoop/pull/2231#issuecomment-675089972 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 41s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. | ||| _ branch-3.2 Compile Tests _ | | +0 :ok: | mvndep | 3m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 26s | branch-3.2 passed | | +1 :green_heart: | compile | 14m 17s | branch-3.2 passed | | +1 :green_heart: | checkstyle | 2m 42s | branch-3.2 passed | | +1 :green_heart: | mvnsite | 5m 12s | branch-3.2 passed | | +1 :green_heart: | shadedclient | 21m 29s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 4m 17s | branch-3.2 passed | | +0 :ok: | spotbugs | 0m 43s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 7m 43s | branch-3.2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 32s | the patch passed | | +1 :green_heart: | compile | 13m 56s | the patch passed | | +1 :green_heart: | javac | 13m 56s | the patch passed | | -0 :warning: | checkstyle | 2m 42s | root: The patch generated 4 new + 594 unchanged - 1 fixed = 598 total (was 595) | | +1 :green_heart: | mvnsite | 5m 9s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 12m 19s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 4m 15s | the patch passed | | +1 :green_heart: | findbugs | 8m 33s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 8m 57s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 1s | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 4m 55s | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 4m 46s | hadoop-aws in the patch passed. | | +1 :green_heart: | unit | 1m 36s | hadoop-azure in the patch passed. | | +1 :green_heart: | unit | 1m 6s | hadoop-azure-datalake in the patch passed. | | +1 :green_heart: | asflicense | 0m 51s | The patch does not generate ASF License warnings. 
| | | | 154m 29s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2231 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 317d8cd18428 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | branch-3.2 / 9aa78fe | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 | | checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/2/artifact/out/diff-checkstyle-root.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/2/testReport/ | | Max. process+thread count | 1374 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure hadoop-tools/hadoop-azure-datalake U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail:
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
umamaheswararao commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r471268265 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ConfigUtil.java ## @@ -166,6 +166,41 @@ public static void addLinkNfly(final Configuration conf, final String src, addLinkNfly(conf, getDefaultMountTableName(conf), src, null, targets); } + + /** + * Add a LinkRegex to the config for the specified mount table. Review comment: Below params javadoc does not help anything I think. If you want to add, please add description( most of that params seems self explanatory ), otherwise just remove params part. ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java ## @@ -646,102 +714,222 @@ boolean isInternalDir() { } /** - * Resolve the pathname p relative to root InodeDir + * Resolve the pathname p relative to root InodeDir. * @param p - input path * @param resolveLastComponent * @return ResolveResult which allows further resolution of the remaining path * @throws FileNotFoundException */ ResolveResult resolve(final String p, final boolean resolveLastComponent) throws FileNotFoundException { -String[] path = breakIntoPathComponents(p); -if (path.length <= 1) { // special case for when path is "/" - T targetFs = root.isInternalDir() ? - getRootDir().getInternalDirFs() : getRootLink().getTargetFileSystem(); - ResolveResult res = new ResolveResult(ResultKind.INTERNAL_DIR, - targetFs, root.fullPath, SlashPath); - return res; -} +ResolveResult resolveResult = null; +resolveResult = getResolveResultFromCache(p, resolveLastComponent); +if (resolveResult != null) { + return resolveResult; +} + +try { + String[] path = breakIntoPathComponents(p); + if (path.length <= 1) { // special case for when path is "/" +T targetFs = root.isInternalDir() ? +getRootDir().getInternalDirFs() +: getRootLink().getTargetFileSystem(); +resolveResult = new ResolveResult(ResultKind.INTERNAL_DIR, +targetFs, root.fullPath, SlashPath); +return resolveResult; + } -/** - * linkMergeSlash has been configured. The root of this mount table has - * been linked to the root directory of a file system. - * The first non-slash path component should be name of the mount table. - */ -if (root.isLink()) { - Path remainingPath; - StringBuilder remainingPathStr = new StringBuilder(); - // ignore first slash - for (int i = 1; i < path.length; i++) { -remainingPathStr.append("/").append(path[i]); + /** + * linkMergeSlash has been configured. The root of this mount table has + * been linked to the root directory of a file system. + * The first non-slash path component should be name of the mount table. 
+ */ + if (root.isLink()) { +Path remainingPath; +StringBuilder remainingPathStr = new StringBuilder(); +// ignore first slash +for (int i = 1; i < path.length; i++) { + remainingPathStr.append("/").append(path[i]); +} +remainingPath = new Path(remainingPathStr.toString()); +resolveResult = new ResolveResult(ResultKind.EXTERNAL_DIR, +getRootLink().getTargetFileSystem(), root.fullPath, remainingPath); +return resolveResult; } - remainingPath = new Path(remainingPathStr.toString()); - ResolveResult res = new ResolveResult(ResultKind.EXTERNAL_DIR, - getRootLink().getTargetFileSystem(), root.fullPath, remainingPath); - return res; -} -Preconditions.checkState(root.isInternalDir()); -INodeDir curInode = getRootDir(); + Preconditions.checkState(root.isInternalDir()); + INodeDir curInode = getRootDir(); -int i; -// ignore first slash -for (i = 1; i < path.length - (resolveLastComponent ? 0 : 1); i++) { - INode nextInode = curInode.resolveInternal(path[i]); - if (nextInode == null) { -if (hasFallbackLink()) { - return new ResolveResult(ResultKind.EXTERNAL_DIR, - getRootFallbackLink().getTargetFileSystem(), - root.fullPath, new Path(p)); -} else { - StringBuilder failedAt = new StringBuilder(path[0]); - for (int j = 1; j <= i; ++j) { -failedAt.append('/').append(path[j]); + // Try to resolve path in the regex mount point + resolveResult = tryResolveInRegexMountpoint(p, resolveLastComponent); + if (resolveResult != null) { +return resolveResult; + } + + int i; + // ignore first slash + for (i = 1; i < path.length - (resolveLastComponent ? 0 : 1); i++) { +INode nextInode = curInode.resolveInternal(path[i]); +if (nextInode ==
[GitHub] [hadoop] sguggilam commented on a change in pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
sguggilam commented on a change in pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#discussion_r471718232 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java ## @@ -1238,18 +1238,19 @@ public void reloginFromKeytab() throws IOException { * This method assumes that {@link #loginUserFromKeytab(String, String)} had * happened already. The Subject field of this UserGroupInformation object is * updated to have the new credentials. - * - * @param forceRelogin Fore re-login irrespective of the time of last login + * + * @param ignoreTimeElapsed Fore re-login irrespective of the time of last Review comment: yes my bad, it's a typo, fixed it This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] liuml07 commented on a change in pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
liuml07 commented on a change in pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#discussion_r471716885 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java ## @@ -1238,18 +1238,19 @@ public void reloginFromKeytab() throws IOException { * This method assumes that {@link #loginUserFromKeytab(String, String)} had * happened already. The Subject field of this UserGroupInformation object is * updated to have the new credentials. - * - * @param forceRelogin Fore re-login irrespective of the time of last login + * + * @param ignoreTimeElapsed Fore re-login irrespective of the time of last Review comment: nit: `Fore` you mean `Force`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] sguggilam commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
sguggilam commented on pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#issuecomment-675053538 Yes @liuml07 that makes sense, let me update the PR This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2231: HADOOP-17210 Backport HADOOP-15691 PathCapabilities
steveloughran commented on pull request #2231: URL: https://github.com/apache/hadoop/pull/2231#issuecomment-675042886 did an s3a test run for strictness; throttle tests failed because the S3guard table was pay-on-demand. Probably time to just cut that test, but not here This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2213: HADOOP-16915. ABFS: Ignoring the test ITestAzureBlobFileSystemRandomRead.testRandomReadPerformance
hadoop-yetus commented on pull request #2213: URL: https://github.com/apache/hadoop/pull/2213#issuecomment-675017807 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 30m 38s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 17s | trunk passed | | +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 0m 28s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 22s | trunk passed | | +1 :green_heart: | mvnsite | 0m 33s | trunk passed | | +1 :green_heart: | shadedclient | 17m 7s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 23s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 0m 53s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 52s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 0m 27s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 0m 23s | the patch passed | | -0 :warning: | checkstyle | 0m 15s | hadoop-tools/hadoop-azure: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) | | +1 :green_heart: | mvnsite | 0m 26s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 34s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 22s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 20s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 0m 57s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 24s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. 
| | | | 105m 42s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2213 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ea0c585ce751 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 092bfe7c8e9 | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/2/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2213/2/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about
[GitHub] [hadoop] hadoop-yetus commented on pull request #2179: HADOOP-17166. ABFS: making max concurrent requests and max requests that can be que…
hadoop-yetus commented on pull request #2179: URL: https://github.com/apache/hadoop/pull/2179#issuecomment-675015282 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 28m 46s | trunk passed | | +1 :green_heart: | compile | 0m 39s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 0m 33s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 27s | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | trunk passed | | +1 :green_heart: | shadedclient | 14m 55s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 33s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 28s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +0 :ok: | spotbugs | 0m 57s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 55s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 28s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 13m 51s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 25s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 24s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | +1 :green_heart: | findbugs | 0m 56s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 19s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. 
| | | | 70m 17s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2179 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 2a5aca16f1f9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 092bfe7c8e9 | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 | | whitespace | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/4/artifact/out/whitespace-eol.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/4/testReport/ | | Max. process+thread count | 423 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2179/4/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use
[GitHub] [hadoop] DadanielZ merged pull request #2222: HADOOP-16966. ABFS: Upgrade store REST API version to 2019-12-12
DadanielZ merged pull request #: URL: https://github.com/apache/hadoop/pull/ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] DadanielZ commented on pull request #2222: HADOOP-16966. ABFS: Upgrade store REST API version to 2019-12-12
DadanielZ commented on pull request #: URL: https://github.com/apache/hadoop/pull/#issuecomment-675005126 looks good, +1 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2231: HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.
hadoop-yetus commented on pull request #2231: URL: https://github.com/apache/hadoop/pull/2231#issuecomment-674888089 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 10m 23s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. | ||| _ branch-3.2 Compile Tests _ | | +0 :ok: | mvndep | 3m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 35s | branch-3.2 passed | | +1 :green_heart: | compile | 14m 19s | branch-3.2 passed | | +1 :green_heart: | checkstyle | 2m 44s | branch-3.2 passed | | +1 :green_heart: | mvnsite | 5m 14s | branch-3.2 passed | | +1 :green_heart: | shadedclient | 21m 47s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 4m 17s | branch-3.2 passed | | +0 :ok: | spotbugs | 0m 47s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 7m 44s | branch-3.2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 31s | the patch passed | | +1 :green_heart: | compile | 13m 38s | the patch passed | | +1 :green_heart: | javac | 13m 38s | the patch passed | | -0 :warning: | checkstyle | 2m 41s | root: The patch generated 12 new + 594 unchanged - 1 fixed = 606 total (was 595) | | +1 :green_heart: | mvnsite | 5m 9s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 12m 45s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 4m 18s | the patch passed | | +1 :green_heart: | findbugs | 8m 28s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 0s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 58s | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 4m 52s | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 4m 45s | hadoop-aws in the patch passed. | | +1 :green_heart: | unit | 1m 37s | hadoop-azure in the patch passed. | | +1 :green_heart: | unit | 1m 0s | hadoop-azure-datalake in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | The patch does not generate ASF License warnings. 
| | | | 164m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2231 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 8011ac3de4b2 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | branch-3.2 / 9aa78fe | | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~16.04-b01 | | checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/1/artifact/out/diff-checkstyle-root.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/1/testReport/ | | Max. process+thread count | 1393 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure hadoop-tools/hadoop-azure-datalake U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2231/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail:
[jira] [Updated] (HADOOP-17211) ABFS : Add support for authentication mechanism at container level
[ https://issues.apache.org/jira/browse/HADOOP-17211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oasis K updated HADOOP-17211: - Summary: ABFS : Add support for authentication mechanism at container level (was: ABFS : Add support using various authentication mechanisms at container level) > ABFS : Add support for authentication mechanism at container level > -- > > Key: HADOOP-17211 > URL: https://issues.apache.org/jira/browse/HADOOP-17211 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Oasis K >Priority: Major > > ABFS supports using various authentication mechanisms at storage level. > As part of access level policies , application might have access to various > containers belonging to same storage account using different authentication > mechanisms. For one container application can have read, write etc , for > other it may have only read permission. Application should be able to > leverage authentication mechanisms for same storage account at container > level. > One should be able to use SAS as authentication mechanism for one container & > OAuth as authentication mechanism for another container. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17211) ABFS : Add support using various authentication mechanisms at container level
Oasis K created HADOOP-17211: Summary: ABFS : Add support using various authentication mechanisms at container level Key: HADOOP-17211 URL: https://issues.apache.org/jira/browse/HADOOP-17211 Project: Hadoop Common Issue Type: Improvement Components: fs/azure Affects Versions: 3.3.0 Reporter: Oasis K ABFS supports various authentication mechanisms at the storage-account level. As part of access-level policies, an application might have access to several containers belonging to the same storage account using different authentication mechanisms. For one container the application may have read, write, etc., while for another it may have only read permission. The application should be able to use different authentication mechanisms for the same storage account at the container level. For example, one should be able to use SAS as the authentication mechanism for one container and OAuth for another. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
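For a concrete picture of the ask: ABFS today selects the auth mechanism per storage account (the `fs.azure.account.auth.type[.<account>]` key), and this issue proposes a container-scoped equivalent. Below is a rough sketch of what a client might set; the container-level key name is purely hypothetical, since no such key exists yet:

```java
import org.apache.hadoop.conf.Configuration;

public class ContainerAuthSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Existing behaviour: one auth type for the whole storage account.
    conf.set("fs.azure.account.auth.type.mystore.dfs.core.windows.net",
        "OAuth");

    // Proposed behaviour (hypothetical key name, invented for illustration):
    // a read-only container on the same account authenticated via SAS.
    conf.set("fs.azure.container.auth.type.readonly.mystore.dfs.core.windows.net",
        "SAS");
  }
}
```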
[GitHub] [hadoop] steveloughran opened a new pull request #2231: HADOOP-15691 Add PathCapabilities to FileSystem and FileContext.
steveloughran opened a new pull request #2231: URL: https://github.com/apache/hadoop/pull/2231 This is the backport to branch-3.2; see [HADOOP-17210](https://issues.apache.org/jira/browse/HADOOP-17210) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-17210) backport HADOOP-15691 PathCapabilities API to branch-3.2
[ https://issues.apache.org/jira/browse/HADOOP-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-17210 started by Steve Loughran. --- > backport HADOOP-15691 PathCapabilities API to branch-3.2 > > > Key: HADOOP-17210 > URL: https://issues.apache.org/jira/browse/HADOOP-17210 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, fs/s3 >Affects Versions: 3.2.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > Backport the PathCapabilities API of HADOOP-15691 as a precursor to the > HADOOP-15230 backport, so making it easier to probe FS abilities. > going back further than this may be hard, but we'll see -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17210) backport HADOOP-15691 PathCapabilities API to branch-3.2
Steve Loughran created HADOOP-17210: --- Summary: backport HADOOP-15691 PathCapabilities API to branch-3.2 Key: HADOOP-17210 URL: https://issues.apache.org/jira/browse/HADOOP-17210 Project: Hadoop Common Issue Type: New Feature Components: fs, fs/s3 Affects Versions: 3.2.1 Reporter: Steve Loughran Assignee: Steve Loughran Backport the PathCapabilities API of HADOOP-15691 as a precursor to the HADOOP-15230 backport, so making it easier to probe FS abilities. going back further than this may be hard, but we'll see -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
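For readers following the backport: the PathCapabilities API being pulled back lets callers probe a filesystem instance for a capability before relying on it. A small sketch of how client code uses it once the backport lands, matching the trunk API as of 3.3.0:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonPathCapabilities;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CapabilityProbe {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);
    FileSystem fs = path.getFileSystem(new Configuration());
    // Ask the filesystem whether append is supported at this path instead
    // of hard-coding per-scheme assumptions in the caller.
    boolean canAppend =
        fs.hasPathCapability(path, CommonPathCapabilities.FS_APPEND);
    System.out.println("append supported at " + path + ": " + canAppend);
  }
}
```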
[GitHub] [hadoop] steveloughran commented on pull request #1833: HADOOP-16493. S3AFilesystem.initiateRename() shared parents
steveloughran commented on pull request #1833: URL: https://github.com/apache/hadoop/pull/1833#issuecomment-674786155 superseded This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-16493) S3AFilesystem.initiateRename() can skip check on dest.parent status if src has same parent
[ https://issues.apache.org/jira/browse/HADOOP-16493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16493. - Resolution: Duplicate Ended up in HADOOP-13230. Not sure if I did that deliberately or not :) > S3AFilesystem.initiateRename() can skip check on dest.parent status if src > has same parent > -- > > Key: HADOOP-16493 > URL: https://issues.apache.org/jira/browse/HADOOP-16493 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > Speedup inferred from debug logs (probably not a regression from > HADOOP-15183, more something we'd not noticed). > There's a check in {{initiateRename()}} to make sure the parent dir of the > dest exists. > If dest.getParent() is src.getParent() (i.e. a same-dir rename) or is any > other ancestor, we don't need those HEAD/LIST requests. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
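For anyone landing on this resolved issue, the optimisation it describes is a simple short-circuit. An illustrative sketch (not the actual S3AFileSystem code) of the condition under which the probe could be skipped:

```java
import org.apache.hadoop.fs.Path;

final class RenameParentCheck {
  private RenameParentCheck() {
  }

  /**
   * True when the "does dest's parent exist?" probe can be skipped during
   * rename: if dest's parent is src's parent (a same-dir rename) or any
   * other ancestor of src, the existence of src already proves the parent
   * directory exists, so the extra HEAD/LIST requests are unnecessary.
   */
  static boolean canSkipDestParentCheck(Path src, Path dest) {
    Path destParent = dest.getParent();
    if (destParent == null) {
      return true; // dest is the root, nothing to probe
    }
    for (Path ancestor = src.getParent(); ancestor != null;
        ancestor = ancestor.getParent()) {
      if (ancestor.equals(destParent)) {
        return true;
      }
    }
    return false;
  }
}
```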
[GitHub] [hadoop] steveloughran closed pull request #1833: HADOOP-16493. S3AFilesystem.initiateRename() shared parents
steveloughran closed pull request #1833: URL: https://github.com/apache/hadoop/pull/1833 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
umamaheswararao commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r471345598 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java ## @@ -0,0 +1,1864 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs; + +import com.google.common.base.Preconditions; +import org.apache.hadoop.HadoopIllegalArgumentException; +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.crypto.key.KeyProvider; +import org.apache.hadoop.fs.BlockLocation; +import org.apache.hadoop.fs.BlockStoragePolicySpi; +import org.apache.hadoop.fs.CacheFlag; +import org.apache.hadoop.fs.ContentSummary; +import org.apache.hadoop.fs.CreateFlag; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileChecksum; +import org.apache.hadoop.fs.FileEncryptionInfo; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsServerDefaults; +import org.apache.hadoop.fs.FsStatus; +import org.apache.hadoop.fs.LocatedFileStatus; +import org.apache.hadoop.fs.Options; +import org.apache.hadoop.fs.PartialListing; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.PathFilter; +import org.apache.hadoop.fs.PathHandle; +import org.apache.hadoop.fs.QuotaUsage; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.StorageType; +import org.apache.hadoop.fs.XAttrSetFlag; +import org.apache.hadoop.fs.permission.AclEntry; +import org.apache.hadoop.fs.permission.AclStatus; +import org.apache.hadoop.fs.permission.FsAction; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.fs.viewfs.ViewFileSystem; +import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme; +import org.apache.hadoop.hdfs.client.HdfsDataOutputStream; +import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse; +import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo; +import org.apache.hadoop.hdfs.protocol.CachePoolEntry; +import org.apache.hadoop.hdfs.protocol.CachePoolInfo; +import org.apache.hadoop.hdfs.protocol.DatanodeInfo; +import org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult; +import org.apache.hadoop.hdfs.protocol.EncryptionZone; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo; +import org.apache.hadoop.hdfs.protocol.HdfsConstants; +import org.apache.hadoop.hdfs.protocol.HdfsPathHandle; +import 
org.apache.hadoop.hdfs.protocol.OpenFileEntry; +import org.apache.hadoop.hdfs.protocol.OpenFilesIterator; +import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing; +import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus; +import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus; +import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier; +import org.apache.hadoop.security.AccessControlException; +import org.apache.hadoop.security.token.DelegationTokenIssuer; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.util.Progressable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.FileNotFoundException; +import java.io.IOException; + +import java.net.InetSocketAddress; +import java.net.URI; +import java.util.ArrayList; +import java.util.Collection; +import java.util.EnumSet; +import java.util.List; +import java.util.Map; + +/** + * The ViewDistributedFileSystem is an extended class to DistributedFileSystem + * with additional mounting functionality. The goal is to have better API + * compatibility for HDFS users when using mounting + * filesystem(ViewFileSystemOverloadScheme). + * The ViewFileSystemOverloadScheme{@link ViewFileSystemOverloadScheme} is a new + * filesystem with
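Background for this review thread: ViewDistributedFileSystem is intended to slot in where DistributedFileSystem is used today while resolving paths through ViewFS mount points. A rough wiring sketch, assuming the usual `fs.<scheme>.impl` override and the standard ViewFS mount-table keys; the exact recommended configuration is part of what HDFS-15533 is still settling, so treat these settings as assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewDfsWiring {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: point the hdfs:// scheme at the mount-aware implementation.
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.hdfs.ViewDistributedFileSystem");
    // Standard ViewFS mount-table entry; cluster and namenode names here
    // are illustrative only.
    conf.set("fs.viewfs.mounttable.ns1.link./data",
        "hdfs://remote-nn:8020/data");

    FileSystem fs = new Path("hdfs://ns1/").getFileSystem(conf);
    System.out.println("Loaded filesystem: " + fs.getClass().getName());
  }
}
```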
[jira] [Commented] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy
[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178859#comment-17178859 ] Hadoop QA commented on HADOOP-17122: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 47s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 1 new + 41 unchanged - 1 fixed = 42 total (was 42) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 24s{color} | {color:green} hadoop-distcp in the patch
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
umamaheswararao commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r471345598 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java ## @@ -0,0 +1,1864 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs; + +import com.google.common.base.Preconditions; +import org.apache.hadoop.HadoopIllegalArgumentException; +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.crypto.key.KeyProvider; +import org.apache.hadoop.fs.BlockLocation; +import org.apache.hadoop.fs.BlockStoragePolicySpi; +import org.apache.hadoop.fs.CacheFlag; +import org.apache.hadoop.fs.ContentSummary; +import org.apache.hadoop.fs.CreateFlag; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileChecksum; +import org.apache.hadoop.fs.FileEncryptionInfo; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsServerDefaults; +import org.apache.hadoop.fs.FsStatus; +import org.apache.hadoop.fs.LocatedFileStatus; +import org.apache.hadoop.fs.Options; +import org.apache.hadoop.fs.PartialListing; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.PathFilter; +import org.apache.hadoop.fs.PathHandle; +import org.apache.hadoop.fs.QuotaUsage; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.StorageType; +import org.apache.hadoop.fs.XAttrSetFlag; +import org.apache.hadoop.fs.permission.AclEntry; +import org.apache.hadoop.fs.permission.AclStatus; +import org.apache.hadoop.fs.permission.FsAction; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.fs.viewfs.ViewFileSystem; +import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme; +import org.apache.hadoop.hdfs.client.HdfsDataOutputStream; +import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse; +import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo; +import org.apache.hadoop.hdfs.protocol.CachePoolEntry; +import org.apache.hadoop.hdfs.protocol.CachePoolInfo; +import org.apache.hadoop.hdfs.protocol.DatanodeInfo; +import org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult; +import org.apache.hadoop.hdfs.protocol.EncryptionZone; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo; +import org.apache.hadoop.hdfs.protocol.HdfsConstants; +import org.apache.hadoop.hdfs.protocol.HdfsPathHandle; +import 
org.apache.hadoop.hdfs.protocol.OpenFileEntry; +import org.apache.hadoop.hdfs.protocol.OpenFilesIterator; +import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing; +import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus; +import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus; +import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier; +import org.apache.hadoop.security.AccessControlException; +import org.apache.hadoop.security.token.DelegationTokenIssuer; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.util.Progressable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.FileNotFoundException; +import java.io.IOException; + +import java.net.InetSocketAddress; +import java.net.URI; +import java.util.ArrayList; +import java.util.Collection; +import java.util.EnumSet; +import java.util.List; +import java.util.Map; + +/** + * The ViewDistributedFileSystem is an extended class to DistributedFileSystem + * with additional mounting functionality. The goal is to have better API + * compatibility for HDFS users when using mounting + * filesystem(ViewFileSystemOverloadScheme). + * The ViewFileSystemOverloadScheme{@link ViewFileSystemOverloadScheme} is a new + * filesystem with
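To make the intent of the quoted javadoc concrete, here is a minimal sketch (not part of the patch) of how a client might use such a class: the hdfs:// implementation is swapped for ViewDistributedFileSystem and mount links are declared with the existing ViewFS mount-table properties. The cluster name `ns1`, the target URIs, and the use of `fs.hdfs.impl` as the switch to enable it are illustrative assumptions, not taken from the pull request.

```java
// Sketch only: property names follow the ViewFS mount-table convention; the
// cluster name "ns1" and target URIs are made-up values, and wiring the class
// in through fs.hdfs.impl is an assumption, not taken from the patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class ViewDfsMountSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumption: plug the new class in as the implementation behind hdfs:// URIs.
    conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.ViewDistributedFileSystem");
    // Mount links, keyed by the cluster/authority name, as in plain ViewFS.
    conf.set("fs.viewfs.mounttable.ns1.link./data", "hdfs://ns1/data");
    conf.set("fs.viewfs.mounttable.ns1.link./archive", "o3fs://bucket.volume/archive");

    // Callers keep using the hdfs:// scheme and the DistributedFileSystem API;
    // paths under mount points are resolved to their target file systems internally.
    FileSystem fs = FileSystem.get(URI.create("hdfs://ns1/"), conf);
    for (FileStatus st : fs.listStatus(new Path("/data"))) {
      System.out.println(st.getPath());
    }
  }
}
```

The design goal stated in the quoted javadoc is exactly this: existing DistributedFileSystem users should not have to change their code or URIs to get mount-point behaviour.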
[GitHub] [hadoop] aajisaka commented on a change in pull request #2226: HADOOP-17205. Move personality file from Yetus to Hadoop repository
aajisaka commented on a change in pull request #2226: URL: https://github.com/apache/hadoop/pull/2226#discussion_r471319019 ## File path: Jenkinsfile ## @@ -96,8 +96,7 @@ pipeline { YETUS_ARGS+=("--basedir=${WORKSPACE}/${SOURCEDIR}") # our project defaults come from a personality file -# which will get loaded automatically by setting the project name -YETUS_ARGS+=("--project=hadoop") Review comment: Would you add `"--project=hadoop"`? https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/2/console The project name becomes 'unknown', and that name is used in many places. I'm +1 if that is addressed. Thank you @sunchao
[GitHub] [hadoop] swamirishi commented on pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy
swamirishi commented on pull request #2133: URL: https://github.com/apache/hadoop/pull/2133#issuecomment-674734152 @steveloughran I have made the required changes. Can the code be merged?
[GitHub] [hadoop] ayushtkn commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
ayushtkn commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r471313374 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java ## @@ -0,0 +1,1864 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs; + +import com.google.common.base.Preconditions; +import org.apache.hadoop.HadoopIllegalArgumentException; +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.crypto.key.KeyProvider; +import org.apache.hadoop.fs.BlockLocation; +import org.apache.hadoop.fs.BlockStoragePolicySpi; +import org.apache.hadoop.fs.CacheFlag; +import org.apache.hadoop.fs.ContentSummary; +import org.apache.hadoop.fs.CreateFlag; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileChecksum; +import org.apache.hadoop.fs.FileEncryptionInfo; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsServerDefaults; +import org.apache.hadoop.fs.FsStatus; +import org.apache.hadoop.fs.LocatedFileStatus; +import org.apache.hadoop.fs.Options; +import org.apache.hadoop.fs.PartialListing; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.PathFilter; +import org.apache.hadoop.fs.PathHandle; +import org.apache.hadoop.fs.QuotaUsage; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.StorageType; +import org.apache.hadoop.fs.XAttrSetFlag; +import org.apache.hadoop.fs.permission.AclEntry; +import org.apache.hadoop.fs.permission.AclStatus; +import org.apache.hadoop.fs.permission.FsAction; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.fs.viewfs.ViewFileSystem; +import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme; +import org.apache.hadoop.hdfs.client.HdfsDataOutputStream; +import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse; +import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo; +import org.apache.hadoop.hdfs.protocol.CachePoolEntry; +import org.apache.hadoop.hdfs.protocol.CachePoolInfo; +import org.apache.hadoop.hdfs.protocol.DatanodeInfo; +import org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult; +import org.apache.hadoop.hdfs.protocol.EncryptionZone; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo; +import org.apache.hadoop.hdfs.protocol.HdfsConstants; +import org.apache.hadoop.hdfs.protocol.HdfsPathHandle; +import 
org.apache.hadoop.hdfs.protocol.OpenFileEntry; +import org.apache.hadoop.hdfs.protocol.OpenFilesIterator; +import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing; +import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus; +import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus; +import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier; +import org.apache.hadoop.security.AccessControlException; +import org.apache.hadoop.security.token.DelegationTokenIssuer; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.util.Progressable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.FileNotFoundException; +import java.io.IOException; + +import java.net.InetSocketAddress; +import java.net.URI; +import java.util.ArrayList; +import java.util.Collection; +import java.util.EnumSet; +import java.util.List; +import java.util.Map; + +/** + * The ViewDistributedFileSystem is an extended class to DistributedFileSystem + * with additional mounting functionality. The goal is to have better API + * compatibility for HDFS users when using mounting + * filesystem(ViewFileSystemOverloadScheme). + * The ViewFileSystemOverloadScheme{@link ViewFileSystemOverloadScheme} is a new + * filesystem with inherited
[GitHub] [hadoop] hadoop-yetus commented on pull request #2226: HADOOP-17205. Move personality file from Yetus to Hadoop repository
hadoop-yetus commented on pull request #2226: URL: https://github.com/apache/hadoop/pull/2226#issuecomment-674725445

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 2s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 28m 47s | trunk passed |
| +1 :green_heart: | mvnsite | 20m 19s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 19s | branch has no errors when building and testing our client artifacts. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 19m 9s | the patch passed |
| +1 :green_heart: | mvnsite | 17m 40s | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 18s | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 0s | patch has no errors when building and testing our client artifacts. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 46s | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 2s | The patch does not generate ASF License warnings. |
| | | 138m 26s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2226 |
| Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit |
| uname | Linux c46257755145 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5092ea62ecb |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/2/testReport/ |
| Max. process+thread count | 477 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2226/2/console |
| versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17178798#comment-17178798 ]

Hemanth Boyina commented on HADOOP-17144:
------------------------------------------

[~aajisaka] [~iwasakims] can you please review the patch

> Update Hadoop's lz4 to v1.9.2
> -----------------------------
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Major
> Attachments: HADOOP-17144.001.patch, HADOOP-17144.002.patch, HADOOP-17144.003.patch
>
> Update hadoop's native lz4 to v1.9.2
[jira] [Updated] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy
[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaminathan Balachandran updated HADOOP-17122:
-----------------------------------------------
    Attachment: (was: HADOOP-17122.001.patch)

> Bug in preserving Directory Attributes in DistCp with Atomic Copy
> ------------------------------------------------------------------
>
> Key: HADOOP-17122
> URL: https://issues.apache.org/jira/browse/HADOOP-17122
> Project: Hadoop Common
> Issue Type: Bug
> Components: tools/distcp
> Affects Versions: 3.1.2, 3.2.1
> Reporter: Swaminathan Balachandran
> Priority: Major
> Attachments: HADOOP-17122.001.patch, Screenshot 2020-07-11 at 10.26.30 AM.png
>
> Description:
> In case of Atomic Copy, the copied data is committed and only after that the preserve-directory-attributes step runs.
> Preserving directory attributes is done over the work path and not the final path. I have fixed the base directory to point to the final path.
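To illustrate the quoted description, here is a minimal sketch (not the actual DistCp code) of the path mapping the fix implies: with -atomic, files land in a temporary work directory and are committed to the final directory afterwards, so attribute preservation has to be computed against the final path rather than the work path. The helper, class name, and directory names below are hypothetical.

```java
// Illustrative only: maps a path under a temporary atomic-copy work directory
// to the location it will have after commit, which is where directory
// attributes should be preserved. Names are hypothetical, not DistCp's own.
import org.apache.hadoop.fs.Path;

public class AtomicCopyPathSketch {

  /** Assumes underWorkDir really is located beneath workDir. */
  static Path toFinalPath(Path underWorkDir, Path workDir, Path finalDir) {
    String work = workDir.toUri().getPath();
    String full = underWorkDir.toUri().getPath();
    String relative = full.substring(work.length());   // e.g. "/db/table"
    return relative.isEmpty() ? finalDir : new Path(finalDir + relative);
  }

  public static void main(String[] args) {
    Path workDir = new Path("/user/x/.distcp.tmp.job_1");   // hypothetical atomic work dir
    Path finalDir = new Path("/warehouse/target");          // hypothetical final target
    Path copiedDir = new Path(workDir, "db/table");

    // Preserving attributes against copiedDir (the work path) targets a directory
    // that disappears after the atomic commit; the final path is the one to use.
    System.out.println(toFinalPath(copiedDir, workDir, finalDir)); // /warehouse/target/db/table
  }
}
```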
[jira] [Updated] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy
[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaminathan Balachandran updated HADOOP-17122:
-----------------------------------------------
    Status: Open  (was: Patch Available)

> Bug in preserving Directory Attributes in DistCp with Atomic Copy
> ------------------------------------------------------------------
>
> Key: HADOOP-17122
> URL: https://issues.apache.org/jira/browse/HADOOP-17122
> Project: Hadoop Common
> Issue Type: Bug
> Components: tools/distcp
> Affects Versions: 3.2.1, 3.1.2
> Reporter: Swaminathan Balachandran
> Priority: Major
> Attachments: HADOOP-17122.001.patch, HADOOP-17122.001.patch, Screenshot 2020-07-11 at 10.26.30 AM.png
>
> Description:
> In case of Atomic Copy, the copied data is committed and only after that the preserve-directory-attributes step runs.
> Preserving directory attributes is done over the work path and not the final path. I have fixed the base directory to point to the final path.
[jira] [Updated] (HADOOP-17122) Bug in preserving Directory Attributes in DistCp with Atomic Copy
[ https://issues.apache.org/jira/browse/HADOOP-17122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaminathan Balachandran updated HADOOP-17122:
-----------------------------------------------
    Attachment: HADOOP-17122.001.patch
    Status: Patch Available  (was: Open)

> Bug in preserving Directory Attributes in DistCp with Atomic Copy
> ------------------------------------------------------------------
>
> Key: HADOOP-17122
> URL: https://issues.apache.org/jira/browse/HADOOP-17122
> Project: Hadoop Common
> Issue Type: Bug
> Components: tools/distcp
> Affects Versions: 3.2.1, 3.1.2
> Reporter: Swaminathan Balachandran
> Priority: Major
> Attachments: HADOOP-17122.001.patch, HADOOP-17122.001.patch, Screenshot 2020-07-11 at 10.26.30 AM.png
>
> Description:
> In case of Atomic Copy, the copied data is committed and only after that the preserve-directory-attributes step runs.
> Preserving directory attributes is done over the work path and not the final path. I have fixed the base directory to point to the final path.
[GitHub] [hadoop] hemanthboyina commented on pull request #2172: HDFS-15483. Ordered snapshot deletion: Disallow rename between two snapshottable directories.
hemanthboyina commented on pull request #2172: URL: https://github.com/apache/hadoop/pull/2172#issuecomment-674712845 thanks @bshashikant for the contribution, thanks @szetszwo @mukul1987 for the review
[GitHub] [hadoop] hemanthboyina commented on pull request #2172: HDFS-15483. Ordered snapshot deletion: Disallow rename between two snapshottable directories.
hemanthboyina commented on pull request #2172: URL: https://github.com/apache/hadoop/pull/2172#issuecomment-674711607 +1
[GitHub] [hadoop] hemanthboyina merged pull request #2172: HDFS-15483. Ordered snapshot deletion: Disallow rename between two snapshottable directories.
hemanthboyina merged pull request #2172: URL: https://github.com/apache/hadoop/pull/2172
[GitHub] [hadoop] huangtianhua commented on pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS
huangtianhua commented on pull request #2189: URL: https://github.com/apache/hadoop/pull/2189#issuecomment-674689358 @liuml07 Hi, would you please help to review it? We will then modify this ASAP, thanks very much :)
[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2229: HDFS-15533: Provide DFS API compatible class, but use ViewFileSystemOverloadScheme inside.
umamaheswararao commented on a change in pull request #2229: URL: https://github.com/apache/hadoop/pull/2229#discussion_r471257416 ## File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java ## @@ -0,0 +1,1864 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hdfs; + +import com.google.common.base.Preconditions; +import org.apache.hadoop.HadoopIllegalArgumentException; +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.crypto.key.KeyProvider; +import org.apache.hadoop.fs.BlockLocation; +import org.apache.hadoop.fs.BlockStoragePolicySpi; +import org.apache.hadoop.fs.CacheFlag; +import org.apache.hadoop.fs.ContentSummary; +import org.apache.hadoop.fs.CreateFlag; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileChecksum; +import org.apache.hadoop.fs.FileEncryptionInfo; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsServerDefaults; +import org.apache.hadoop.fs.FsStatus; +import org.apache.hadoop.fs.LocatedFileStatus; +import org.apache.hadoop.fs.Options; +import org.apache.hadoop.fs.PartialListing; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.PathFilter; +import org.apache.hadoop.fs.PathHandle; +import org.apache.hadoop.fs.QuotaUsage; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.StorageType; +import org.apache.hadoop.fs.XAttrSetFlag; +import org.apache.hadoop.fs.permission.AclEntry; +import org.apache.hadoop.fs.permission.AclStatus; +import org.apache.hadoop.fs.permission.FsAction; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.fs.viewfs.ViewFileSystem; +import org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme; +import org.apache.hadoop.hdfs.client.HdfsDataOutputStream; +import org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse; +import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry; +import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo; +import org.apache.hadoop.hdfs.protocol.CachePoolEntry; +import org.apache.hadoop.hdfs.protocol.CachePoolInfo; +import org.apache.hadoop.hdfs.protocol.DatanodeInfo; +import org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult; +import org.apache.hadoop.hdfs.protocol.EncryptionZone; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy; +import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo; +import org.apache.hadoop.hdfs.protocol.HdfsConstants; +import org.apache.hadoop.hdfs.protocol.HdfsPathHandle; +import 
org.apache.hadoop.hdfs.protocol.OpenFileEntry; +import org.apache.hadoop.hdfs.protocol.OpenFilesIterator; +import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport; +import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing; +import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus; +import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus; +import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier; +import org.apache.hadoop.security.AccessControlException; +import org.apache.hadoop.security.token.DelegationTokenIssuer; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.util.Progressable; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.FileNotFoundException; +import java.io.IOException; + +import java.net.InetSocketAddress; +import java.net.URI; +import java.util.ArrayList; +import java.util.Collection; +import java.util.EnumSet; +import java.util.List; +import java.util.Map; + +/** + * The ViewDistributedFileSystem is an extended class to DistributedFileSystem + * with additional mounting functionality. The goal is to have better API + * compatibility for HDFS users when using mounting + * filesystem(ViewFileSystemOverloadScheme). + * The ViewFileSystemOverloadScheme{@link ViewFileSystemOverloadScheme} is a new + * filesystem with