[GitHub] [hadoop] hadoop-yetus commented on pull request #2200: MAPREDUCE-7290. ShuffleHeader should be compatible between client…
hadoop-yetus commented on pull request #2200: URL: https://github.com/apache/hadoop/pull/2200#issuecomment-670828565

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 28s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 10m 53s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 53s | trunk passed |
| +1 :green_heart: | compile | 2m 15s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 1m 55s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 0m 48s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 10s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 51s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 50s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 48s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 0m 41s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 1m 52s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 55s | the patch passed |
| +1 :green_heart: | compile | 2m 7s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 2m 6s | the patch passed |
| +1 :green_heart: | compile | 1m 51s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 1m 51s | the patch passed |
| -0 :warning: | checkstyle | 0m 39s | hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 51 new + 196 unchanged - 2 fixed = 247 total (was 198) |
| +1 :green_heart: | mvnsite | 0m 53s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 42s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 43s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 40s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -1 :x: | findbugs | 1m 18s | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 6m 46s | hadoop-mapreduce-client-core in the patch passed. |
| -1 :x: | unit | 2m 31s | hadoop-mapreduce-client-shuffle in the patch failed. |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | | 97m 44s | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| | Method org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader.readByVersion(DataInput) seems to be useless At ShuffleHeader.java:[line 124] |
| | Method org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader.writeByVersion(DataOutput) seems to be useless At ShuffleHeader.java:[line 147] |
| | org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader$HeaderVersion defines compareTo(ShuffleHeader$HeaderVersion) and uses Object.equals() At ShuffleHeader.java:[lines 331-349] |
| Failed junit tests | hadoop.mapred.TestShuffleHandler |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2200/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2200 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 440c07ae1b01 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 64753addba9 |
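The last FindBugs item above ("defines compareTo(...) and uses Object.equals()") flags a classic pitfall: a Comparable type that never overrides equals(), so two versions that compare as equal under compareTo() are still unequal under equals(). A minimal, hypothetical sketch of the usual fix follows; the field names and structure are illustrative, not the actual ShuffleHeader$HeaderVersion code:

```java
// Hypothetical version holder; overriding equals()/hashCode() keeps them
// consistent with compareTo(), which is what the FindBugs warning asks for.
class HeaderVersion implements Comparable<HeaderVersion> {
  private final int major;
  private final int minor;

  HeaderVersion(int major, int minor) {
    this.major = major;
    this.minor = minor;
  }

  @Override
  public int compareTo(HeaderVersion other) {
    int c = Integer.compare(major, other.major);
    return c != 0 ? c : Integer.compare(minor, other.minor);
  }

  // Without this override, a.compareTo(b) == 0 would not imply a.equals(b).
  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof HeaderVersion)) {
      return false;
    }
    HeaderVersion that = (HeaderVersion) o;
    return major == that.major && minor == that.minor;
  }

  // equals() and hashCode() must be overridden together.
  @Override
  public int hashCode() {
    return 31 * major + minor;
  }

  public static void main(String[] args) {
    HeaderVersion a = new HeaderVersion(1, 0);
    HeaderVersion b = new HeaderVersion(1, 0);
    System.out.println(a.compareTo(b) == 0 && a.equals(b)); // prints "true"
  }
}
```

The same idea applies to any Comparable in the patch: once compareTo() exists, equality should be defined in terms of the same fields.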
[GitHub] [hadoop] swamirishi commented on pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy
swamirishi commented on pull request #2133: URL: https://github.com/apache/hadoop/pull/2133#issuecomment-670827651 @steveloughran Did you take a look at this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree
hadoop-yetus commented on pull request #2203: URL: https://github.com/apache/hadoop/pull/2203#issuecomment-670825716

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 9s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 29m 20s | trunk passed |
| +1 :green_heart: | compile | 1m 16s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 1m 9s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 0m 48s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 20s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 40s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 1m 20s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 3m 3s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 0s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 11s | the patch passed |
| +1 :green_heart: | compile | 1m 8s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 1m 8s | the patch passed |
| +1 :green_heart: | compile | 1m 4s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 1m 4s | the patch passed |
| -0 :warning: | checkstyle | 0m 42s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 137 unchanged - 0 fixed = 143 total (was 137) |
| +1 :green_heart: | mvnsite | 1m 8s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 37s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 20s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 1m 17s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 3m 2s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 123m 2s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | | 204m 38s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
| | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.hdfs.TestGetFileChecksum |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2203/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2203 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 7b4a3596aee9 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 64753addba9 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2203/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2203/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2203/1/testReport/ |
| Max. process+thread count | 3789 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output |
[jira] [Commented] (HADOOP-17068) client fails forever when namenode ipaddr changed
[ https://issues.apache.org/jira/browse/HADOOP-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173544#comment-17173544 ]

zhenzhao wang commented on HADOOP-17068:

We had seen the problem multiple times too. One workaround we had been using for years is to increase dfs.client.failover.connection.retries.on.timeouts to 3. It helps with the previous HDFS client version.

> client fails forever when namenode ipaddr changed
> -
>
> Key: HADOOP-17068
> URL: https://issues.apache.org/jira/browse/HADOOP-17068
> Project: Hadoop Common
> Issue Type: Bug
> Components: hdfs-client
> Reporter: Sean Chow
> Assignee: Sean Chow
> Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17068.001.patch, HDFS-15390.01.patch
>
> For machine replacement, I replaced my standby namenode with a new ipaddr and kept the same hostname, and updated the client's hosts file so the name resolves correctly.
> When I try to run a failover to transition to the new namenode (let's say nn2), the client fails to read or write forever until it is restarted.
> That puts the YARN NodeManager in a sick state; even new tasks encounter this exception too, until all NodeManagers restart.
>
> {code:java}
> 20/06/02 15:12:25 WARN ipc.Client: Address change detected. Old: nn2-192-168-1-100/192.168.1.100:9000 New: nn2-192-168-1-100/192.168.1.200:9000
> 20/06/02 15:12:25 DEBUG ipc.Client: closing ipc connection to nn2-192-168-1-100/192.168.1.200:9000: Connection refused
> java.net.ConnectException: Connection refused
>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>         at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
>         at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:608)
>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
>         at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1517)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1440)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1401)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>         at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
>         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:193)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> {code}
>
> We can see the client has {{Address change detected}}, but it still fails. I found out that's because when the method {{updateAddress()}} returns true, {{handleConnectionFailure()}} throws an exception that breaks the next retry with the right ipaddr.
>
> Client.java: setupConnection()
> {code:java}
> } catch (ConnectTimeoutException toe) {
>   /* Check for an address change and update the local reference.
>    * Reset the failure counter if the address was changed
>    */
>   if (updateAddress()) {
>     timeoutFailures = ioFailures = 0;
>   }
>   handleConnectionTimeout(timeoutFailures++, maxRetriesOnSocketTimeouts, toe);
> } catch (IOException ie) {
>   if (updateAddress()) {
>     timeoutFailures = ioFailures = 0;
>   }
>   // because the namenode ip changed in updateAddress(), the old namenode ipaddress cannot be accessed now
>   // handleConnectionFailure will throw an exception, so the next retry never has a chance to use the right server updated in updateAddress()
>   handleConnectionFailure(ioFailures++, ie);
> }
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
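The retry flow described above can be sketched as follows. This is a simplified, hypothetical model of the loop, not the actual org.apache.hadoop.ipc.Client code: the point is that when updateAddress() reports a changed address, the client should retry immediately with the fresh address instead of falling through to the failure handler, which may throw and abort before the new address is ever tried.

```java
// Simplified, hypothetical model of the retry loop discussed above; all
// names are illustrative, not the real org.apache.hadoop.ipc.Client code.
class ReconnectSketch {
  private String currentAddress;      // cached (possibly stale) NN address
  private final String freshAddress;  // what DNS resolves to now
  private int ioFailures = 0;
  private static final int MAX_RETRIES = 3;

  ReconnectSketch(String staleAddress, String freshAddress) {
    this.currentAddress = staleAddress;
    this.freshAddress = freshAddress;
  }

  // Stands in for Client.updateAddress(): true if re-resolving the
  // NameNode hostname yields a different address than the cached one.
  private boolean updateAddress() {
    if (!currentAddress.equals(freshAddress)) {
      currentAddress = freshAddress;
      return true;
    }
    return false;
  }

  // Stands in for setupConnection(): only the fresh address accepts.
  private void connect(String address) {
    if (!address.equals(freshAddress)) {
      throw new RuntimeException("Connection refused: " + address);
    }
  }

  String connectWithRetry() {
    while (true) {
      try {
        connect(currentAddress);
        return currentAddress; // connected
      } catch (RuntimeException ie) {
        if (updateAddress()) {
          // Address changed: reset the counter and retry right away with
          // the updated address, bypassing the failure handler that
          // would otherwise throw and end the loop (the bug above).
          ioFailures = 0;
          continue;
        }
        if (++ioFailures > MAX_RETRIES) {
          throw ie; // analogous to handleConnectionFailure() giving up
        }
      }
    }
  }

  public static void main(String[] args) {
    ReconnectSketch c =
        new ReconnectSketch("192.168.1.100:9000", "192.168.1.200:9000");
    System.out.println("connected to " + c.connectWithRetry());
  }
}
```

With the `continue` in place, the stale address fails once, the loop re-resolves, and the second attempt succeeds; without it, the failure handler can throw before the fresh address is tried, which is the "fails forever" behavior reported here.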
[GitHub] [hadoop] JohnZZGithub commented on pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#issuecomment-670807166 The UT failures are not related to the change. Here are some references: https://issues.apache.org/jira/browse/HADOOP-15891 Design doc: https://issues.apache.org/jira/secure/attachment/12946315/HDFS-13948_%20Regex%20Link%20Type%20In%20Mount%20Table-v1.pdf
[jira] [Commented] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2
[ https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173539#comment-17173539 ]

Hadoop QA commented on HADOOP-17144:

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 20s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 40s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 20m 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 36m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 32s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 1s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 31m 4s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 33m 28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 3s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 19m 4s{color} | {color:red} root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 46 new + 125 unchanged - 37 fixed = 171 total (was 162) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 19m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 35s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 16m 35s{color} | {color:red} root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 41 new + 130 unchanged - 32 fixed = 171 total (was 162) {color} |
| {color:green}+1{color} | {color:green} golang {color} | {color:green} 16m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 18m 48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. {color} |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory
hadoop-yetus commented on pull request #2176: URL: https://github.com/apache/hadoop/pull/2176#issuecomment-670804682

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 32s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | buf | 0m 0s | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 22s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 26m 42s | trunk passed |
| +1 :green_heart: | compile | 19m 26s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 16m 51s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 3m 7s | trunk passed |
| +1 :green_heart: | mvnsite | 4m 5s | trunk passed |
| +1 :green_heart: | shadedclient | 21m 42s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 53s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 4m 0s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 2m 35s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 7m 50s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 45s | the patch passed |
| +1 :green_heart: | compile | 18m 45s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| -1 :x: | cc | 18m 45s | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 11 new + 151 unchanged - 11 fixed = 162 total (was 162) |
| +1 :green_heart: | javac | 18m 45s | the patch passed |
| +1 :green_heart: | compile | 16m 46s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -1 :x: | cc | 16m 46s | root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 25 new + 137 unchanged - 25 fixed = 162 total (was 162) |
| +1 :green_heart: | javac | 16m 46s | the patch passed |
| -0 :warning: | checkstyle | 2m 59s | root: The patch generated 1 new + 826 unchanged - 0 fixed = 827 total (was 826) |
| +1 :green_heart: | mvnsite | 4m 6s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 1s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 53s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 4m 0s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | findbugs | 8m 22s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 23s | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 20s | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 114m 52s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 1m 7s | The patch does not generate ASF License warnings. |
| | | | 309m 49s | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestDFSStorageStateRecovery |
| | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
| | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
| | hadoop.hdfs.server.namenode.ha.TestStandbyInProgressTail |
| | hadoop.hdfs.server.namenode.TestFSDirectory |
| | hadoop.hdfs.server.namenode.TestFsck |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
| | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
| | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
| | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| | hadoop.hdfs.server.datanode.TestBPOfferService |
| | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
[GitHub] [hadoop] szetszwo opened a new pull request #2203: HDFS-15520 Use visitor pattern to visit namespace tree
szetszwo opened a new pull request #2203: URL: https://github.com/apache/hadoop/pull/2203 https://issues.apache.org/jira/browse/HDFS-15520
[GitHub] [hadoop] smengcl commented on pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory
smengcl commented on pull request #2176: URL: https://github.com/apache/hadoop/pull/2176#issuecomment-670697255 I rebased the commits onto the latest trunk. Also made `dfs.namenode.snapshot.trashroot.enabled` a private config in FSNamesystem.
[GitHub] [hadoop] dbtsai commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
dbtsai commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-670685339 @hemanthboyina The performance should be similar, since snappy-java also uses a native lib that is bundled in the jar file. I haven't benchmarked it yet, and I will try to find time to do it. If possible, feel free to do the testing and post the results back.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2069: HADOOP-16830. IOStatistics API.
hadoop-yetus commented on pull request #2069: URL: https://github.com/apache/hadoop/pull/2069#issuecomment-670674319

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 0m 32s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 3s | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 34 new or modified test files. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 20s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 26m 4s | trunk passed |
| +1 :green_heart: | compile | 20m 48s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 17m 18s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 53s | trunk passed |
| +1 :green_heart: | mvnsite | 3m 12s | trunk passed |
| +1 :green_heart: | shadedclient | 20m 14s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 41s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 2m 39s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 1m 15s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 4m 50s | trunk passed |
| -0 :warning: | patch | 1m 36s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 0s | the patch passed |
| +1 :green_heart: | compile | 19m 32s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 19m 32s | root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2050 unchanged - 1 fixed = 2050 total (was 2051) |
| +1 :green_heart: | compile | 17m 25s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 17m 25s | root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1943 unchanged - 1 fixed = 1943 total (was 1944) |
| -0 :warning: | checkstyle | 2m 43s | root: The patch generated 7 new + 241 unchanged - 25 fixed = 248 total (was 266) |
| +1 :green_heart: | mvnsite | 3m 14s | the patch passed |
| -1 :x: | whitespace | 0m 0s | The patch has 14 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 13m 42s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 46s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 1m 37s | hadoop-common in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| +1 :green_heart: | javadoc | 0m 35s | hadoop-mapreduce-client-core in the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. |
| +1 :green_heart: | javadoc | 0m 42s | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 0 unchanged - 4 fixed = 0 total (was 4) |
| -1 :x: | findbugs | 2m 35s | hadoop-common-project/hadoop-common generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 37s | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 7m 7s | hadoop-mapreduce-client-core in the patch passed. |
| +1 :green_heart: | unit | 1m 37s | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | The patch does not generate ASF License warnings. |
| | | | 191m 51s | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-common-project/hadoop-common |
| | Inconsistent synchronization of org.apache.hadoop.fs.statistics.MeanStatistic.samples; locked 68% of time Unsynchronized
[GitHub] [hadoop] smengcl commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory
smengcl commented on a change in pull request #2176: URL: https://github.com/apache/hadoop/pull/2176#discussion_r467222131

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java

## @@ -516,6 +516,11 @@
public static final int DFS_NAMENODE_SNAPSHOT_SKIPLIST_MAX_SKIP_LEVELS_DEFAULT = 0;
+ public static final String DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED =
+     "dfs.namenode.snapshot.trashroot.enabled";
+ public static final boolean DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED_DEFAULT =

Review comment: Got it. I will put it in `FSNamesystem` as a private config then, as it is used there, similar to what HDFS-15481 [did](https://github.com/apache/hadoop/blob/e072d33327b8f5d38b74a15e279d492ad379a47c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java#L88-L96).
[GitHub] [hadoop] liuml07 commented on pull request #2190: HADOOP-17182. Remove breadcrumbs from web site
liuml07 commented on pull request #2190: URL: https://github.com/apache/hadoop/pull/2190#issuecomment-670667507 Thank you @ayushtkn, yes, that's what I was asking. Since we still have the information clearly on the page, I guess the change to remove the dead link makes more sense.
[GitHub] [hadoop] ayushtkn edited a comment on pull request #2190: HADOOP-17182. Remove breadcrumbs from web site
ayushtkn edited a comment on pull request #2190: URL: https://github.com/apache/hadoop/pull/2190#issuecomment-670654455 `After removing it, when we click one left menu, is it straightforward to tell the current version from the page/header?` Do you mean the Hadoop version? If so, it would still be there on the right side. ![image](https://user-images.githubusercontent.com/25608848/89677670-7dffb880-d90b-11ea-9d0b-62c0af838994.png) Can't say how prominent it is, though.
[GitHub] [hadoop] ayushtkn commented on pull request #2190: HADOOP-17182. Remove breadcrumbs from web site
ayushtkn commented on pull request #2190: URL: https://github.com/apache/hadoop/pull/2190#issuecomment-670654455 `After removing it, when we click one left menu, is it straightforward to tell the current version from the page/header?` Do you mean the Hadoop version? If so, it would still be there on the right side. ![image](https://user-images.githubusercontent.com/25608848/89677414-092c7e80-d90b-11ea-8b08-0c7d9fe27691.png) Can't say how prominent it is, though.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2200: MAPREDUCE-7290. ShuffleHeader should be compatible between client…
hadoop-yetus commented on pull request #2200: URL: https://github.com/apache/hadoop/pull/2200#issuecomment-670649392 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 3m 20s | Maven dependency ordering for branch | | -1 :x: | mvninstall | 0m 52s | root in trunk failed. | | -1 :x: | compile | 0m 22s | hadoop-mapreduce-client in trunk failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | compile | 0m 23s | hadoop-mapreduce-client in trunk failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -0 :warning: | checkstyle | 0m 20s | The patch fails to run checkstyle in hadoop-mapreduce-client | | -1 :x: | mvnsite | 0m 22s | hadoop-mapreduce-client-core in trunk failed. | | -1 :x: | mvnsite | 0m 22s | hadoop-mapreduce-client-shuffle in trunk failed. | | +1 :green_heart: | shadedclient | 1m 29s | branch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 1m 0s | hadoop-mapreduce-client-core in trunk failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | javadoc | 0m 22s | hadoop-mapreduce-client-shuffle in trunk failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | javadoc | 0m 31s | hadoop-mapreduce-client-core in trunk failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | javadoc | 0m 23s | hadoop-mapreduce-client-shuffle in trunk failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | +0 :ok: | spotbugs | 4m 32s | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | -1 :x: | findbugs | 0m 23s | hadoop-mapreduce-client-core in trunk failed. | | -1 :x: | findbugs | 0m 23s | hadoop-mapreduce-client-shuffle in trunk failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 1m 5s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 11s | hadoop-mapreduce-client-core in the patch failed. | | -1 :x: | mvninstall | 0m 13s | hadoop-mapreduce-client-shuffle in the patch failed. | | -1 :x: | compile | 0m 12s | hadoop-mapreduce-client in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | javac | 0m 12s | hadoop-mapreduce-client in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | compile | 0m 11s | hadoop-mapreduce-client in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | javac | 0m 11s | hadoop-mapreduce-client in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -0 :warning: | checkstyle | 0m 10s | The patch fails to run checkstyle in hadoop-mapreduce-client | | -1 :x: | mvnsite | 0m 12s | hadoop-mapreduce-client-core in the patch failed. | | -1 :x: | mvnsite | 0m 14s | hadoop-mapreduce-client-shuffle in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 0m 6s | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 14s | hadoop-mapreduce-client-core in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | javadoc | 0m 19s | hadoop-mapreduce-client-shuffle in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | javadoc | 0m 12s | hadoop-mapreduce-client-core in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | javadoc | 0m 11s | hadoop-mapreduce-client-shuffle in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. 
| | -1 :x: | findbugs | 0m 10s | hadoop-mapreduce-client-core in the patch failed. | | -1 :x: | findbugs | 0m 11s | hadoop-mapreduce-client-shuffle in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 11s | hadoop-mapreduce-client-core in the patch failed. | | -1 :x: | unit | 0m 11s | hadoop-mapreduce-client-shuffle in the patch failed. | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. | | | | 18m 58s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40
[GitHub] [hadoop] liuml07 commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
liuml07 commented on pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#issuecomment-670640493 Thanks @steveloughran I added Arpit and kihwal as reviewers.
[GitHub] [hadoop] smengcl commented on a change in pull request #2176: HDFS-15492. Make trash root inside each snapshottable directory
smengcl commented on a change in pull request #2176: URL: https://github.com/apache/hadoop/pull/2176#discussion_r467189530 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java ## @@ -2144,4 +2146,293 @@ public void testECCloseCommittedBlock() throws Exception { LambdaTestUtils.intercept(IOException.class, "", () -> str.close()); } } + + @Test + public void testGetTrashRoot() throws IOException { +Configuration conf = getTestConfiguration(); +conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true); +MiniDFSCluster cluster = +new MiniDFSCluster.Builder(conf).numDataNodes(1).build(); +try { + DistributedFileSystem dfs = cluster.getFileSystem(); + Path testDir = new Path("/ssgtr/test1/"); + Path file0path = new Path(testDir, "file-0"); + dfs.create(file0path); + + Path trBeforeAllowSnapshot = dfs.getTrashRoot(file0path); + String trBeforeAllowSnapshotStr = trBeforeAllowSnapshot.toUri().getPath(); + // The trash root should be in user home directory + String homeDirStr = dfs.getHomeDirectory().toUri().getPath(); + assertTrue(trBeforeAllowSnapshotStr.startsWith(homeDirStr)); + + dfs.allowSnapshot(testDir); + + Path trAfterAllowSnapshot = dfs.getTrashRoot(file0path); + String trAfterAllowSnapshotStr = trAfterAllowSnapshot.toUri().getPath(); + // The trash root should now be in the snapshot root + String testDirStr = testDir.toUri().getPath(); + assertTrue(trAfterAllowSnapshotStr.startsWith(testDirStr)); + + // Cleanup + dfs.disallowSnapshot(testDir); + dfs.delete(testDir, true); +} finally { + if (cluster != null) { +cluster.shutdown(); + } +} + } + + private boolean isPathInUserHome(String pathStr, DistributedFileSystem dfs) { +String homeDirStr = dfs.getHomeDirectory().toUri().getPath(); +return pathStr.startsWith(homeDirStr); + } + + @Test + public void testGetTrashRoots() throws IOException { +Configuration conf = getTestConfiguration(); +conf.setBoolean(DFS_NAMENODE_SNAPSHOT_TRASHROOT_ENABLED, true); 
+MiniDFSCluster cluster = +new MiniDFSCluster.Builder(conf).numDataNodes(1).build(); +try { + DistributedFileSystem dfs = cluster.getFileSystem(); + Path testDir = new Path("/ssgtr/test1/"); + Path file0path = new Path(testDir, "file-0"); + dfs.create(file0path); + // Create user trash + Path currUserHome = dfs.getHomeDirectory(); + Path currUserTrash = new Path(currUserHome, FileSystem.TRASH_PREFIX); + dfs.mkdirs(currUserTrash); + // Create trash inside test directory + Path testDirTrash = new Path(testDir, FileSystem.TRASH_PREFIX); + Path testDirTrashCurrUser = new Path(testDirTrash, + UserGroupInformation.getCurrentUser().getShortUserName()); + dfs.mkdirs(testDirTrashCurrUser); + + Collection trashRoots = dfs.getTrashRoots(false); + // getTrashRoots should only return 1 empty user trash in the home dir now + assertEquals(1, trashRoots.size()); + FileStatus firstFileStatus = trashRoots.iterator().next(); + String pathStr = firstFileStatus.getPath().toUri().getPath(); + assertTrue(isPathInUserHome(pathStr, dfs)); + // allUsers should not make a difference for now because we have one user + Collection trashRootsAllUsers = dfs.getTrashRoots(true); + assertEquals(trashRoots, trashRootsAllUsers); + + dfs.allowSnapshot(testDir); + + Collection trashRootsAfter = dfs.getTrashRoots(false); + // getTrashRoots should return 1 more trash root inside snapshottable dir + assertEquals(trashRoots.size() + 1, trashRootsAfter.size()); + boolean foundUserHomeTrash = false; + boolean foundSnapDirUserTrash = false; + String testDirStr = testDir.toUri().getPath(); + for (FileStatus fileStatus : trashRootsAfter) { +String currPathStr = fileStatus.getPath().toUri().getPath(); +if (isPathInUserHome(currPathStr, dfs)) { + foundUserHomeTrash = true; +} else if (currPathStr.startsWith(testDirStr)) { + foundSnapDirUserTrash = true; +} + } + assertTrue(foundUserHomeTrash); + assertTrue(foundSnapDirUserTrash); + // allUsers should not make a difference for now because we have one user + 
Collection trashRootsAfterAllUsers = dfs.getTrashRoots(true); + assertEquals(trashRootsAfter, trashRootsAfterAllUsers); + + // Create trash root for user0 + UserGroupInformation ugi = UserGroupInformation.createRemoteUser("user0"); + String user0HomeStr = DFSUtilClient.getHomeDirectory(conf, ugi); + Path user0Trash = new Path(user0HomeStr, FileSystem.TRASH_PREFIX); + dfs.mkdirs(user0Trash); + // allUsers flag set to false should be unaffected + Collection trashRootsAfter2 = dfs.getTrashRoots(false); +
[jira] [Commented] (HADOOP-11219) Upgrade to netty 4
[ https://issues.apache.org/jira/browse/HADOOP-11219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173383#comment-17173383 ] Kevin Risden commented on HADOOP-11219: --- https://snyk.io/vuln/SNYK-JAVA-IONETTY-473694 https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869 https://github.com/netty/netty/issues/9571#issuecomment-552070089 There is at least one CVE affecting Netty <4 - Ironically this was published around the same week as [~weichiu]'s comment. > Upgrade to netty 4 > -- > > Key: HADOOP-11219 > URL: https://issues.apache.org/jira/browse/HADOOP-11219 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Haohui Mai >Assignee: Haohui Mai >Priority: Major > > This is an umbrella jira to track the effort of upgrading to Netty 4. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JohnZZGithub commented on a change in pull request #2185: HADOOP-15891. provide Regex Based Mount Point In Inode Tree
JohnZZGithub commented on a change in pull request #2185: URL: https://github.com/apache/hadoop/pull/2185#discussion_r467176661 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java ## @@ -486,84 +506,113 @@ protected InodeTree(final Configuration config, final String viewName, final UserGroupInformation ugi = UserGroupInformation.getCurrentUser(); for (Entry si : config) { final String key = si.getKey(); - if (key.startsWith(mountTablePrefix)) { -gotMountTableEntry = true; -LinkType linkType; -String src = key.substring(mountTablePrefix.length()); -String settings = null; -if (src.startsWith(linkPrefix)) { - src = src.substring(linkPrefix.length()); - if (src.equals(SlashPath.toString())) { -throw new UnsupportedFileSystemException("Unexpected mount table " -+ "link entry '" + key + "'. Use " -+ Constants.CONFIG_VIEWFS_LINK_MERGE_SLASH + " instead!"); - } - linkType = LinkType.SINGLE; -} else if (src.startsWith(linkFallbackPrefix)) { - if (src.length() != linkFallbackPrefix.length()) { -throw new IOException("ViewFs: Mount points initialization error." + -" Invalid " + Constants.CONFIG_VIEWFS_LINK_FALLBACK + -" entry in config: " + src); - } - linkType = LinkType.SINGLE_FALLBACK; -} else if (src.startsWith(linkMergePrefix)) { // A merge link - src = src.substring(linkMergePrefix.length()); - linkType = LinkType.MERGE; -} else if (src.startsWith(linkMergeSlashPrefix)) { - // This is a LinkMergeSlash entry. This entry should - // not have any additional source path. - if (src.length() != linkMergeSlashPrefix.length()) { -throw new IOException("ViewFs: Mount points initialization error." 
+ -" Invalid " + -" entry in config: " + src); - } - linkType = LinkType.MERGE_SLASH; -} else if (src.startsWith(Constants.CONFIG_VIEWFS_LINK_NFLY)) { - // prefix.settings.src - src = src.substring(Constants.CONFIG_VIEWFS_LINK_NFLY.length() + 1); - // settings.src - settings = src.substring(0, src.indexOf('.')); - // settings - - // settings.src - src = src.substring(settings.length() + 1); - // src - - linkType = LinkType.NFLY; -} else if (src.startsWith(Constants.CONFIG_VIEWFS_HOMEDIR)) { - // ignore - we set home dir from config - continue; -} else { - throw new IOException("ViewFs: Cannot initialize: Invalid entry in " + - "Mount table in config: " + src); -} + if (!key.startsWith(mountTablePrefix)) { Review comment: It looks complex, but the major change here is simple. Before: `if (key.startsWith(mountTablePrefix)) { /* hundred lines of code */ }` After: `if (!key.startsWith(mountTablePrefix)) { continue; } /* hundred lines of code */`
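The early-continue (guard clause) refactor described above can be sketched on its own. This is an illustrative reduction with a made-up `MOUNT_TABLE_PREFIX` loop, not the real `InodeTree` parsing code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class GuardClauseSketch {
    static final String MOUNT_TABLE_PREFIX = "fs.viewfs.mounttable.";

    // After the refactor: keys that do not match the prefix are skipped with
    // an early continue, so the long parsing body that follows is no longer
    // nested one level deep inside an if-block.
    static List<String> mountEntries(Map<String, String> config) {
        List<String> entries = new ArrayList<>();
        for (Map.Entry<String, String> e : config.entrySet()) {
            String key = e.getKey();
            if (!key.startsWith(MOUNT_TABLE_PREFIX)) {
                continue; // guard clause replaces one level of nesting
            }
            // ...the "hundred lines" of link-type parsing would sit here...
            entries.add(key.substring(MOUNT_TABLE_PREFIX.length()));
        }
        return entries;
    }
}
```

The behavior is unchanged; only the control flow is flattened, which is why the diff looks large while the logical change is a single inverted condition.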
[jira] [Commented] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java
[ https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173337#comment-17173337 ] Hadoop QA commented on HADOOP-17145: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 25s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 7s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 45s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 17s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 47s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 84 unchanged - 1 fixed = 84 total (was 85) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s{color} | {color:green} the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 13s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 4s{color}
[GitHub] [hadoop] hadoop-yetus commented on pull request #2202: HADOOP-17191. ABFS: Run tests with all AuthTypes
hadoop-yetus commented on pull request #2202: URL: https://github.com/apache/hadoop/pull/2202#issuecomment-670609097 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 25 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 28m 54s | trunk passed | | +1 :green_heart: | compile | 0m 31s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 0m 28s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 0m 25s | trunk passed | | +1 :green_heart: | mvnsite | 0m 33s | trunk passed | | +1 :green_heart: | shadedclient | 14m 32s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 21s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 50s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 49s | trunk passed | | -0 :warning: | patch | 1m 3s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 26s | the patch passed | | +1 :green_heart: | compile | 0m 24s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javac | 0m 24s | the patch passed | | +1 :green_heart: | compile | 0m 21s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | javac | 0m 21s | the patch passed | | -0 :warning: | checkstyle | 0m 12s | hadoop-tools/hadoop-azure: The patch generated 4 new + 7 unchanged - 0 fixed = 11 total (was 7) | | +1 :green_heart: | mvnsite | 0m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 28s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 0m 21s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | findbugs | 0m 59s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 18s | hadoop-azure in the patch passed. | | -1 :x: | asflicense | 0m 30s | The patch generated 4 ASF License warnings. 
| | | | 68m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2202 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux fc4fb9d33743 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 81da221c757 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/1/testReport/ | | asflicense | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/1/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 446 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2202/1/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] swamirishi commented on pull request #2133: HADOOP-17122: Preserving Directory Attributes in DistCp with Atomic Copy
swamirishi commented on pull request #2133: URL: https://github.com/apache/hadoop/pull/2133#issuecomment-670593741 Sure Thanks
[GitHub] [hadoop] bilaharith opened a new pull request #2202: HADOOP-17191. ABFS: Run tests with all AuthTypes
bilaharith opened a new pull request #2202: URL: https://github.com/apache/hadoop/pull/2202
[GitHub] [hadoop] hemanthboyina commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hemanthboyina commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-670567445 Thanks for the proposal and thanks for the PR. Have you done any performance tests comparing native snappy and java snappy? Is there any sort of decrease or increase? Also, in the PR you can remove the commented-out code check-ins.
[jira] [Updated] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java
[ https://issues.apache.org/jira/browse/HADOOP-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor updated HADOOP-17145: -- Attachment: HADOOP-17145.007.patch > Unauthenticated users are not authorized to access this page message is > misleading in HttpServer2.java > -- > > Key: HADOOP-17145 > URL: https://issues.apache.org/jira/browse/HADOOP-17145 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Major > Attachments: HADOOP-17145.001.patch, HADOOP-17145.002.patch, > HADOOP-17145.003.patch, HADOOP-17145.004.patch, HADOOP-17145.005.patch, > HADOOP-17145.006.patch, HADOOP-17145.007.patch > > > Recently one of the users were misled by the message "Unauthenticated users > are not authorized to access this page" when the user was not an admin user. > At that point the user is authenticated but has no admin access, so it's > actually not an authentication issue but an authorization issue. > Also, 401 as error code would be better. > Something like "User is unauthorized to access the page" would help to users > to find out what is the problem during access an http endpoint.
[GitHub] [hadoop] steveloughran commented on pull request #2069: HADOOP-16830. IOStatistics API.
steveloughran commented on pull request #2069: URL: https://github.com/apache/hadoop/pull/2069#issuecomment-670536132 FFS. I slightly expand the synchronization of MeanStatistic and now findbugs gets even more upset than before. I'm going to declare all as sync even though it will hurt performance, *just to shut up findbugs*
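For context, the "declare all as sync" approach mentioned above looks roughly like this. A simplified sketch only (the real MeanStatistic in the IOStatistics API has a richer interface): synchronizing every method that touches the shared fields gives FindBugs the consistent locking discipline its inconsistent-synchronization detector wants, at some cost in throughput.

```java
// Simplified sketch of a fully-synchronized mean statistic; not the actual
// MeanStatistic class. Every accessor of the shared fields is synchronized,
// so FindBugs sees the same lock held on every access.
public class MeanStatisticSketch {
    private long samples;
    private double sum;

    public synchronized void addSample(double value) {
        samples++;
        sum += value;
    }

    public synchronized double mean() {
        return samples == 0 ? 0.0 : sum / samples;
    }

    public synchronized long getSamples() {
        return samples;
    }
}
```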
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2170: HADOOP-1320. Dir Marker getFileStatus() changes backport
hadoop-yetus removed a comment on pull request #2170: URL: https://github.com/apache/hadoop/pull/2170#issuecomment-663656386 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 12m 47s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ branch-3.2 Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 1s | branch-3.2 passed | | +1 :green_heart: | compile | 15m 25s | branch-3.2 passed | | +1 :green_heart: | checkstyle | 2m 26s | branch-3.2 passed | | +1 :green_heart: | mvnsite | 2m 3s | branch-3.2 passed | | +1 :green_heart: | shadedclient | 19m 3s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 45s | branch-3.2 passed | | +0 :ok: | spotbugs | 1m 1s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 56s | branch-3.2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 25s | the patch passed | | +1 :green_heart: | compile | 14m 35s | the patch passed | | +1 :green_heart: | javac | 14m 35s | the patch passed | | -0 :warning: | checkstyle | 2m 26s | root: The patch generated 19 new + 5 unchanged - 0 fixed = 24 total (was 5) | | +1 :green_heart: | mvnsite | 2m 4s | the patch passed | | -1 :x: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 12m 36s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 1m 45s | the patch passed | | +1 :green_heart: | findbugs | 3m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 8m 59s | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 4m 45s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | The patch does not generate ASF License warnings. | | | | 131m 26s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2170 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 92f7e0490731 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | branch-3.2 / 0fb7c48 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/2/artifact/out/diff-checkstyle-root.txt | | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/2/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/2/testReport/ | | Max. process+thread count | 1385 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2170/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] steveloughran commented on pull request #2197: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class
steveloughran commented on pull request #2197: URL: https://github.com/apache/hadoop/pull/2197#issuecomment-670533722 LGTM, though I'll leave it to someone who understands UGI to give the final vote
[GitHub] [hadoop] hadoop-yetus commented on pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
hadoop-yetus commented on pull request #2201: URL: https://github.com/apache/hadoop/pull/2201#issuecomment-670468155 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 26s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 3m 11s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 57s | trunk passed | | +1 :green_heart: | compile | 19m 25s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | compile | 16m 49s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +1 :green_heart: | checkstyle | 2m 34s | trunk passed | | +1 :green_heart: | mvnsite | 2m 46s | trunk passed | | +1 :green_heart: | shadedclient | 19m 30s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 48s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 2m 39s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | spotbugs | 0m 29s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +0 :ok: | findbugs | 0m 29s | branch/hadoop-project no findbugs output file (findbugsXml.xml) | | +0 :ok: | findbugs | 0m 29s | branch/hadoop-project-dist no findbugs output file (findbugsXml.xml) | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 36s | the patch passed | | -1 :x: | compile | 1m 3s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. 
| | -1 :x: | cc | 1m 3s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | golang | 1m 3s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | javac | 1m 3s | root in the patch failed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1. | | -1 :x: | compile | 0m 53s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | cc | 0m 53s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | golang | 0m 53s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -1 :x: | javac | 0m 53s | root in the patch failed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09. | | -0 :warning: | checkstyle | 2m 19s | root: The patch generated 10 new + 113 unchanged - 2 fixed = 123 total (was 115) | | +1 :green_heart: | mvnsite | 1m 53s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 5s | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 13m 54s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 2s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 | | +1 :green_heart: | javadoc | 1m 57s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 | | +0 :ok: | findbugs | 0m 14s | hadoop-project has no data from findbugs | | +0 :ok: | findbugs | 0m 14s | hadoop-project-dist has no data from findbugs | | -1 :x: | findbugs | 2m 10s | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 12s | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 0m 12s | hadoop-project-dist in the patch passed. | | -1 :x: | unit | 0m 36s | hadoop-common in the patch failed. 
| | +1 :green_heart: | unit | 2m 49s | hadoop-mapreduce-client-nativetask in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. | | | | 129m 45s | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-common-project/hadoop-common | | | Unread field:field be static? At SnappyCodec.java:[line 38] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base:
[jira] [Commented] (HADOOP-17192) ITestS3AHugeFilesSSECDiskBlock failing because of bucket overrides
[ https://issues.apache.org/jira/browse/HADOOP-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173030#comment-17173030 ] Steve Loughran commented on HADOOP-17192: - remove the override in config creation, the way we do in a lot of other tests > ITestS3AHugeFilesSSECDiskBlock failing because of bucket overrides > --- > > Key: HADOOP-17192 > URL: https://issues.apache.org/jira/browse/HADOOP-17192 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Mukund Thakur >Priority: Major > > If we set the conf "fs.s3a.bucket.mthakur-data.server-side-encryption.key" in > our test config, tests in ITestS3AHugeFilesSSECDiskBlocks fail because we are > overriding the bucket configuration, thus overwriting the value of the base config > set here. > [https://github.com/apache/hadoop/blob/81da221c757bef9ec35cd190f14b2f872324c661/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesSSECDiskBlocks.java#L51] > > > Full stack trace: > {code:java} > java.lang.IllegalArgumentException: Invalid base 64 character: ':' > at com.amazonaws.util.Base64Codec.pos(Base64Codec.java:242) at > com.amazonaws.util.Base64Codec.decode4bytes(Base64Codec.java:151) at > com.amazonaws.util.Base64Codec.decode(Base64Codec.java:230) at > com.amazonaws.util.Base64.decode(Base64.java:112) at > com.amazonaws.services.s3.AmazonS3Client.populateSSE_C(AmazonS3Client.java:4379) > at > com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1318) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$6(S3AFileSystem.java:1920) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407) at > org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:370) at > org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1913) > at > 
org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1889) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3027) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2958) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2842) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2798) > at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2772) at > org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2369) at > org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:361) > at > org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:203) > at > org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:59) > at > org.apache.hadoop.fs.s3a.scale.S3AScaleTestBase.setup(S3AScaleTestBase.java:90) > at > org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.setup(AbstractSTestS3AHugeFiles.java:78) > at > org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesSSECDiskBlocks.setup(ITestS3AHugeFilesSSECDiskBlocks.java:41) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at > 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298) > at > org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) at > java.lang.Thread.run(Thread.java:748){code} > > I am not sure if we need to worry too much about this. We can just fix the > local test config. > CC [~ste...@apache.org]
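The "Invalid base 64 character: ':'" failure is easy to reproduce in isolation: an SSE-C key must be a base64 string, and once the per-bucket override leaves the test with a key value containing ':', the SDK's decoder rejects it. A self-contained illustration using the JDK decoder (the AWS SDK uses its own Base64Codec with slightly different wording, but the constraint is the same):

```java
import java.util.Base64;

public class SseKeyDecodeSketch {
    public static void main(String[] args) {
        // ':' is not in the base64 alphabet, so decoding a key value that
        // contains it fails, just as the AWS SDK fails in populateSSE_C above.
        try {
            Base64.getDecoder().decode("not:a:base64:key");
            System.out.println("decoded");
        } catch (IllegalArgumentException e) {
            // The JDK reports "Illegal base64 character 3a";
            // the AWS codec reports "Invalid base 64 character: ':'".
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```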
[jira] [Commented] (HADOOP-17192) ITestS3AHugeFilesSSECDiskBlock failing because of bucket overrides
[ https://issues.apache.org/jira/browse/HADOOP-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173032#comment-17173032 ] Steve Loughran commented on HADOOP-17192: - via "removeBaseAndBucketOverrides()" > ITestS3AHugeFilesSSECDiskBlock failing because of bucket overrides > --- > > Key: HADOOP-17192 > URL: https://issues.apache.org/jira/browse/HADOOP-17192 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Mukund Thakur >Priority: Major
[jira] [Commented] (HADOOP-17188) Support for AWS STSAssumeRoleWithWebIdentitySessionCredentialsProvider based credential provider to support use of IRSA on deployments on AWS EKS Cluster
[ https://issues.apache.org/jira/browse/HADOOP-17188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173027#comment-17173027 ] Steve Loughran commented on HADOOP-17188: - If it's in the AWS SDK JAR we ship, it's a matter of just listing it in the fs.s3a.aws.credentials.provider option. * Do this, let us know how it works, and supply docs * We haven't updated the AWS SDK for a while; if that is needed, create a JIRA for it and have a go following the runbook in testing.md * If there are specific changes needed (per-bucket setting of different options..), then yes, a new provider is welcome. Ideally one we can test > Support for AWS STSAssumeRoleWithWebIdentitySessionCredentialsProvider based > credential provider to support use of IRSA on deployments on AWS EKS Cluster > - > > Key: HADOOP-17188 > URL: https://issues.apache.org/jira/browse/HADOOP-17188 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Arun Ravi M V >Priority: Minor > > The latest version of AWS SDK has support to use IRSA for providing > credentials to Kubernetes pods which can potentially replace the use of > Kube2IAM. For our Apache Spark on Kubernetes use cases, this feature will be > useful. The current Hadoop AWS component does support adding custom > credential provider but I think if we could add > STSAssumeRoleWithWebIdentitySessionCredentialsProvider support to (using > roleArn, role session name, web Identity Token File) to the hadoop-aws > library, it will be useful for the community as such who use AWS EKS. 
> [https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/STSAssumeRoleWithWebIdentitySessionCredentialsProvider.html] > [https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/STSAssumeRoleWithWebIdentitySessionCredentialsProvider.Builder.html > ] > [https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/]
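As the comment above suggests, if the provider class ships in the bundled AWS SDK JAR, the first thing to try is listing it in the S3A credential-provider chain. A hedged sketch of the core-site.xml wiring, assuming the standard fs.s3a.aws.credentials.provider key; note that this particular SDK provider takes a role ARN, session name, and web-identity token file through its builder, which is exactly why the JIRA anticipates a dedicated wrapper may still be needed:

```xml
<!-- Sketch only: add the SDK's web-identity provider to the S3A chain.
     Whether S3A can instantiate it without extra wiring (role ARN, session
     name, token file) is the open question in this JIRA. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.STSAssumeRoleWithWebIdentitySessionCredentialsProvider</value>
</property>
```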
[jira] [Updated] (HADOOP-17188) Support for AWS STSAssumeRoleWithWebIdentitySessionCredentialsProvider based credential provider to support use of IRSA on deployments on AWS EKS Cluster
[ https://issues.apache.org/jira/browse/HADOOP-17188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17188: Affects Version/s: 3.3.0 > Support for AWS STSAssumeRoleWithWebIdentitySessionCredentialsProvider based > credential provider to support use of IRSA on deployments on AWS EKS Cluster > - > > Key: HADOOP-17188 > URL: https://issues.apache.org/jira/browse/HADOOP-17188 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Arun Ravi M V >Priority: Minor
[jira] [Updated] (HADOOP-17188) Support for AWS STSAssumeRoleWithWebIdentitySessionCredentialsProvider based credential provider to support use of IRSA on deployments on AWS EKS Cluster
[ https://issues.apache.org/jira/browse/HADOOP-17188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17188: Priority: Minor (was: Major) > Support for AWS STSAssumeRoleWithWebIdentitySessionCredentialsProvider based > credential provider to support use of IRSA on deployments on AWS EKS Cluster > - > > Key: HADOOP-17188 > URL: https://issues.apache.org/jira/browse/HADOOP-17188 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Arun Ravi M V >Priority: Minor
[jira] [Created] (HADOOP-17192) ITestS3AHugeFilesSSECDiskBlock failing because of bucket overrides
Mukund Thakur created HADOOP-17192: -- Summary: ITestS3AHugeFilesSSECDiskBlock failing because of bucket overrides Key: HADOOP-17192 URL: https://issues.apache.org/jira/browse/HADOOP-17192 Project: Hadoop Common Issue Type: Bug Components: fs/s3 Affects Versions: 3.3.0 Reporter: Mukund Thakur If we set the conf "fs.s3a.bucket.mthakur-data.server-side-encryption.key" in our test config, tests in ITestS3AHugeFilesSSECDiskBlocks fail because we are overriding the bucket configuration, thus overwriting the value of the base config set here. [https://github.com/apache/hadoop/blob/81da221c757bef9ec35cd190f14b2f872324c661/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesSSECDiskBlocks.java#L51] The full stack trace (java.lang.IllegalArgumentException: Invalid base 64 character: ':') is the same one quoted in the comments above. I am not sure if we need to worry too much about this. We can just fix the local test config. CC [~ste...@apache.org]
[GitHub] [hadoop] dbtsai opened a new pull request #2201: HADOOP-17125. Using snappy-java in SnappyCodec
dbtsai opened a new pull request #2201: URL: https://github.com/apache/hadoop/pull/2201 See https://issues.apache.org/jira/browse/HADOOP-17125 for details
[GitHub] [hadoop] hadoop-yetus commented on pull request #2198: YARN-10390. LeafQueue: retain user limits cache across assignContainers() calls.
hadoop-yetus commented on pull request #2198: URL: https://github.com/apache/hadoop/pull/2198#issuecomment-670372361

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 28m 51s | trunk passed |
| +1 :green_heart: | compile | 0m 54s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 0m 51s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 48s | trunk passed |
| +1 :green_heart: | mvnsite | 0m 54s | trunk passed |
| +1 :green_heart: | shadedclient | 14m 56s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 39s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 36s | trunk passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +0 :ok: | spotbugs | 1m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 1m 50s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 50s | the patch passed |
| +1 :green_heart: | compile | 0m 50s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 0m 50s | the patch passed |
| +1 :green_heart: | compile | 0m 41s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | javac | 0m 41s | the patch passed |
| -0 :warning: | checkstyle | 0m 39s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 31 new + 533 unchanged - 10 fixed = 564 total (was 543) |
| +1 :green_heart: | mvnsite | 0m 45s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 16m 10s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 36s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 32s | the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| +1 :green_heart: | findbugs | 1m 44s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 89m 19s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | | 164m 56s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2198 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux af09acbefb1f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 81da221c757 |
| Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
| checkstyle | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/2/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/2/testReport/ |
| Max. process+thread count | 871 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2198/2/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2200: MAPREDUCE-7290. ShuffleHeader should be compatible between client…
hadoop-yetus commented on pull request #2200: URL: https://github.com/apache/hadoop/pull/2200#issuecomment-670355906

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 35s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 3m 19s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 40s | trunk passed |
| +1 :green_heart: | compile | 2m 15s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | compile | 1m 55s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | checkstyle | 0m 44s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 10s | trunk passed |
| +1 :green_heart: | shadedclient | 15m 45s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 51s | trunk passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 48s | trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +0 :ok: | spotbugs | 0m 40s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 1m 51s | trunk passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 53s | the patch passed |
| +1 :green_heart: | compile | 2m 9s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javac | 2m 9s | the patch passed |
| +1 :green_heart: | compile | 1m 48s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| +1 :green_heart: | javac | 1m 48s | the patch passed |
| -0 :warning: | checkstyle | 0m 37s | hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 49 new + 102 unchanged - 2 fixed = 151 total (was 104) |
| +1 :green_heart: | mvnsite | 0m 53s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 33s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 42s | the patch passed with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 |
| +1 :green_heart: | javadoc | 0m 40s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| -1 :x: | findbugs | 1m 16s | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
||| _ Other Tests _ |
| -1 :x: | unit | 6m 48s | hadoop-mapreduce-client-core in the patch failed. |
| -1 :x: | unit | 2m 31s | hadoop-mapreduce-client-shuffle in the patch failed. |
| +1 :green_heart: | asflicense | 0m 34s | The patch does not generate ASF License warnings. |
| | | | 93m 11s | |

| Reason | Tests |
|---:|:--|
| FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| | Method org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader.readByVersion(DataInput) seems to be useless At ShuffleHeader.java:[line 124] |
| | Method org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader.writeByVersion(DataOutput) seems to be useless At ShuffleHeader.java:[line 147] |
| | org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader$HeaderVersion defines compareTo(ShuffleHeader$HeaderVersion) and uses Object.equals() At ShuffleHeader.java:[lines 322-340] |
| Failed junit tests | hadoop.mapreduce.task.reduce.TestFetcher |
| | hadoop.mapred.TestShuffleHandler |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2200/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2200 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 6be1eb8e4ed0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug
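The third FindBugs warning above is the standard "class defines compareTo() but relies on Object.equals()" pattern (EQ_COMPARETO_USE_OBJECT_EQUALS): two logically equal HeaderVersion instances would compare as 0 yet fail equals(), which breaks sorted collections. A minimal sketch of the usual fix is below; the class and field names are hypothetical stand-ins, not the actual ShuffleHeader$HeaderVersion code from the patch.

```java
// Hypothetical sketch of resolving EQ_COMPARETO_USE_OBJECT_EQUALS:
// when a class implements compareTo(), override equals() and hashCode()
// so that compareTo(x) == 0 and equals(x) agree.
public class HeaderVersionSketch implements Comparable<HeaderVersionSketch> {
  private final int major;
  private final int minor;

  public HeaderVersionSketch(int major, int minor) {
    this.major = major;
    this.minor = minor;
  }

  @Override
  public int compareTo(HeaderVersionSketch other) {
    // Order by major version first, then minor.
    int byMajor = Integer.compare(major, other.major);
    return byMajor != 0 ? byMajor : Integer.compare(minor, other.minor);
  }

  // Defining equals() consistently with compareTo() silences the warning.
  @Override
  public boolean equals(Object o) {
    if (!(o instanceof HeaderVersionSketch)) {
      return false;
    }
    return compareTo((HeaderVersionSketch) o) == 0;
  }

  // equals() and hashCode() must be overridden together.
  @Override
  public int hashCode() {
    return 31 * major + minor;
  }

  public static void main(String[] args) {
    HeaderVersionSketch a = new HeaderVersionSketch(1, 0);
    HeaderVersionSketch b = new HeaderVersionSketch(1, 0);
    // Both views of equality now agree; prints: true
    System.out.println(a.equals(b) && a.compareTo(b) == 0);
  }
}
```

Without the equals()/hashCode() overrides, a TreeSet and a HashSet of these objects would disagree about duplicates, which is exactly the inconsistency FindBugs is flagging.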