[jira] [Commented] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162474#comment-17162474
 ] 

Hudson commented on HADOOP-17138:
---------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18462 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18462/])
HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6. (github: 
rev 1b29c9bfeee0035dd042357038b963843169d44c)
* (edit) 
hadoop-cloud-storage-project/hadoop-cos/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/ThrottledAsyncChecker.java
* (edit) hadoop-mapreduce-project/dev-support/findbugs-exclude.xml
* (edit) hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestTimelineReaderHBaseDown.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java


> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -----------------------------------------------------
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.4.0
>
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}
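For readers scanning the list above: the warnings fall into a few recurring patterns. Below is a minimal, hypothetical sketch of two of them, a dead local store (DLS_DEAD_LOCAL_STORE) and internal-representation exposure (EI_EXPOSE_REP), together with the usual fixes. It is not the actual Hadoop code referenced in the warnings.

```java
import java.util.Arrays;
import java.util.Map;

public class SpotbugsPatternDemo {
  private final byte[] buffer = new byte[16];

  // DLS_DEAD_LOCAL_STORE: a value is stored into a local that is never read.
  int deadStore(Map<String, Integer> counts, String user) {
    Integer count = counts.merge(user, 1, Integer::sum); // 'count' is never read
    return counts.get(user);                             // fix: return 'count' or drop the local
  }

  // EI_EXPOSE_REP: returning a mutable field lets callers modify internal state.
  byte[] getBufferUnsafe() {
    return buffer;                                       // flagged by spotbugs
  }

  byte[] getBufferSafe() {
    return Arrays.copyOf(buffer, buffer.length);         // fix: return a defensive copy
  }
}
```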



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-21 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17138:
--------------------------------------
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -----------------------------------------------------
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 3.4.0
>
>
> Spotbugs 4.0.6 generated additional warnings.






[GitHub] [hadoop] iwasakims commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-21 Thread GitBox


iwasakims commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-662236964


   I merged this. Thanks, @aajisaka.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] iwasakims merged pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-21 Thread GitBox


iwasakims merged pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155


   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2160: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2160:
URL: https://github.com/apache/hadoop/pull/2160#issuecomment-662223848


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   2m 23s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m 55s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  23m 55s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 53s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m  4s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 12s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   3m 13s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 53s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 10s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 39s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 39s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 27s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   5m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 24s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  | 121m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 22s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 324m 55s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestHdfsConfigFields |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2160/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2160 |
   | JIRA Issue | HDFS-15478 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 638a11bc8c16 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d23cc9d85d8 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 

[GitHub] [hadoop] iwasakims commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-21 Thread GitBox


iwasakims commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-662212906


   Tests failed due to a web authentication error. The patch does not touch 
the relevant code.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#issuecomment-662208636


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 35s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  19m  6s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 45s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 27s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   2m 39s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 40s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  23m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  19m 17s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 55s |  root: The patch generated 38 new 
+ 68 unchanged - 0 fixed = 106 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  findbugs  |   0m 24s |  hadoop-project has no data from 
findbugs  |
   | -1 :x: |  findbugs  |   2m 31s |  hadoop-common-project/hadoop-common 
generated 4 new + 2 unchanged - 0 fixed = 6 total (was 2)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 24s |  hadoop-project in the patch 
passed.  |
   | -1 :x: |  unit  |   9m 46s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 182m  8s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  The class name com.hadoop.compression.lzo.LzoCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzoCodec  At 
LzoCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzoCodec  At LzoCodec.java:[lines 34-51] |
   |  |  The class name com.hadoop.compression.lzo.LzopCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzopCodec  At 
LzopCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzopCodec  At LzopCodec.java:[lines 34-51] |
   |  |  The class name org.apache.hadoop.io.compress.LzoCodec shadows the 
simple name of the superclass io.airlift.compress.lzo.LzoCodec  At 
LzoCodec.java:the simple name of the superclass 
io.airlift.compress.lzo.LzoCodec  At LzoCodec.java:[line 26] |
   |  |  The class name org.apache.hadoop.io.compress.LzopCodec shadows the 
simple name of the superclass io.airlift.compress.lzo.LzopCodec  At 
LzopCodec.java:the simple name of the superclass 
io.airlift.compress.lzo.LzopCodec  At LzopCodec.java:[lines 29-40] |
   | Failed junit tests | 
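The "class name shadows the simple name of the superclass" warnings above come from a deliberate bridging pattern: a subclass reuses its superclass's simple name in a different package so that old class names keep resolving. A minimal, hypothetical illustration (not the actual codec sources):

```java
// File: one/Codec.java
package one;

public class Codec {
  public void open() { /* base implementation */ }
}

// File: two/Codec.java -- same simple name as the superclass, different package.
// FindBugs flags this because unqualified references to "Codec" become ambiguous,
// and the superclass can only be named by its fully qualified name.
package two;

public class Codec extends one.Codec {
  @Override
  public void open() { /* delegate to or extend the base behaviour */ }
}
```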

[GitHub] [hadoop] hadoop-yetus commented on pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#issuecomment-662205279


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 28s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 38s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  20m 46s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 13s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 25s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   2m 16s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  23m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 36s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  20m 36s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  6s |  root: The patch generated 34 new 
+ 68 unchanged - 0 fixed = 102 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  6s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 12s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  findbugs  |   0m 23s |  hadoop-project has no data from 
findbugs  |
   | -1 :x: |  findbugs  |   2m 27s |  hadoop-common-project/hadoop-common 
generated 4 new + 2 unchanged - 0 fixed = 6 total (was 2)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 24s |  hadoop-project in the patch 
passed.  |
   | -1 :x: |  unit  |  10m 39s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 184m 33s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  The class name com.hadoop.compression.lzo.LzoCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzoCodec  At 
LzoCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzoCodec  At LzoCodec.java:[lines 30-47] |
   |  |  The class name com.hadoop.compression.lzo.LzopCodec shadows the simple 
name of the superclass org.apache.hadoop.io.compress.LzopCodec  At 
LzopCodec.java:the simple name of the superclass 
org.apache.hadoop.io.compress.LzopCodec  At LzopCodec.java:[lines 30-47] |
   |  |  The class name org.apache.hadoop.io.compress.LzoCodec shadows the 
simple name of the superclass io.airlift.compress.lzo.LzoCodec  At 
LzoCodec.java:the simple name of the superclass 
io.airlift.compress.lzo.LzoCodec  At LzoCodec.java:[line 23] |
   |  |  The class name org.apache.hadoop.io.compress.LzopCodec shadows the 
simple name of the superclass io.airlift.compress.lzo.LzopCodec  At 
LzopCodec.java:the simple name of the superclass 
io.airlift.compress.lzo.LzopCodec  At LzopCodec.java:[lines 26-37] |
   | Failed junit tests | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-662163499


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 58s |  trunk passed  |
   | +1 :green_heart: |  compile  |  26m  8s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  23m  1s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 31s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  12m 51s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 54s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 37s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-yarn in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-mapreduce-project in trunk failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 28s |  hadoop-sls in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-cos in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   6m 30s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 24s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   3m 38s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |  15m 31s |  hadoop-yarn-project/hadoop-yarn in trunk 
has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   0m 48s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in trunk has 2 extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   4m 49s |  hadoop-mapreduce-project in trunk has 2 
extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   0m 46s |  hadoop-tools/hadoop-sls in trunk has 1 
extant findbugs warnings.  |
   | -1 :x: |  findbugs  |   0m 41s |  hadoop-cloud-storage-project/hadoop-cos 
in trunk has 1 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   9m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 40s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 48s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 48s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 48s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  11m 10s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  9s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 29s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-yarn in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-mapreduce-project in the patch 
failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-sls in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-cos in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   5m 53s 

[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458446690



##
File path: hadoop-project/pom.xml
##
@@ -1727,6 +1728,11 @@
         <artifactId>jna</artifactId>
         <version>${jna.version}</version>
       </dependency>
+      <dependency>

Review comment:
   @steveloughran we would like to actually bundle this jar into hadoop-common, 
since it is a very clean jar that is used in many projects, such as Presto and 
ORC, to provide their compression codecs.
   
   In fact, we would like Snappy to fall back to aircompressor if no native lib 
is provided. For many of our devs, it is non-trivial to set up native libs in 
their development env, since it requires compiling many native libs from source 
and installing them into the LD path. If we can fall back to the pure-Java 
Snappy implementation in aircompressor, it will make developers' lives much 
easier.
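
A self-contained sketch of the proposed fallback behaviour; every class and method name below is invented for illustration (in real Hadoop, the native-availability check would come from something like `org.apache.hadoop.util.NativeCodeLoader` rather than a direct `loadLibrary` call):

```java
/** Hypothetical sketch: prefer a native codec, fall back to pure Java. */
public class SnappyFallbackSketch {
  interface Codec { byte[] compress(byte[] raw); }

  static class NativeSnappyCodec implements Codec {       // would call JNI in reality
    public byte[] compress(byte[] raw) { return raw; }    // placeholder body
  }

  static class PureJavaSnappyCodec implements Codec {     // aircompressor-style fallback
    public byte[] compress(byte[] raw) { return raw; }    // placeholder body
  }

  static Codec newCodec() {
    try {
      System.loadLibrary("snappy");      // stand-in for a native-library check
      return new NativeSnappyCodec();
    } catch (UnsatisfiedLinkError e) {
      return new PureJavaSnappyCodec();  // no native lib: development still works
    }
  }
}
```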








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458444739



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/LzopCodec.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress;
+
+import io.airlift.compress.lzo.LzoCodec;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+public class LzopCodec extends io.airlift.compress.lzo.LzopCodec
+implements Configurable, CompressionCodec {
+@Override
+public Class<? extends Compressor> getCompressorType()
+{
+return LzopCodec.HadoopLzopCompressor.class;
+}
+
+/**
+ * No Hadoop code seems to actually use the compressor, so just return a 
dummy one so the createOutputStream method

Review comment:
   The test code calls `createCompressor` without using the actual 
implementation; without this dummy compressor, the test will not pass.








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458444394



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/LzopCodec.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress;
+
+import io.airlift.compress.lzo.LzoCodec;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+public class LzopCodec extends io.airlift.compress.lzo.LzopCodec
+implements Configurable, CompressionCodec {
+@Override
+public Class<? extends Compressor> getCompressorType()
+{
+return LzopCodec.HadoopLzopCompressor.class;
+}
+
+/**
+ * No Hadoop code seems to actually use the compressor, so just return a 
dummy one so the createOutputStream method
+ * with a compressor can function.  This interface can be implemented if 
needed.
+ */
+@DoNotPool
+static class HadoopLzopCompressor
+implements Compressor

Review comment:
   Done. I'm using IntelliJ with the default Java formatter. Does Hadoop 
provide a code-style configuration that IntelliJ can use? Thanks.








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458443904



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class LzoCodec extends org.apache.hadoop.io.compress.LzoCodec {
+private static final Log LOG = LogFactory.getLog(LzoCodec.class);
+
+static final String gplLzoCodec = LzoCodec.class.getName();
+static final String hadoopLzoCodec = 
org.apache.hadoop.io.compress.LzoCodec.class.getName();
+static boolean warned = false;
+
+static {
+LOG.info("Bridging " + gplLzoCodec + " to " + hadoopLzoCodec + ".");

Review comment:
   Done.








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458433596



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class LzoCodec extends org.apache.hadoop.io.compress.LzoCodec {
+private static final Log LOG = LogFactory.getLog(LzoCodec.class);
+
+static final String gplLzoCodec = LzoCodec.class.getName();

Review comment:
   Addressed. Thanks.








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458433541



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class LzoCodec extends org.apache.hadoop.io.compress.LzoCodec {
+private static final Log LOG = LogFactory.getLog(LzoCodec.class);
+
+static final String gplLzoCodec = LzoCodec.class.getName();
+static final String hadoopLzoCodec = 
org.apache.hadoop.io.compress.LzoCodec.class.getName();
+static boolean warned = false;
+
+static {
+LOG.info("Bridging " + gplLzoCodec + " to " + hadoopLzoCodec + ".");
+}
+
+@Override
+public CompressionOutputStream createOutputStream(OutputStream out,
+  Compressor compressor) 
throws IOException {
+if (!warned) {

Review comment:
   Addressed. Thanks.








[GitHub] [hadoop] umamaheswararao commented on pull request #2160: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-21 Thread GitBox


umamaheswararao commented on pull request #2160:
URL: https://github.com/apache/hadoop/pull/2160#issuecomment-662134975


   Thanks for the review @ayushtkn. Updated the doc.






[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458380342



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;

Review comment:
   @steveloughran this might answer the earlier question, `Not sure why the 
com.hadoop classes are there at all.`








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458378806



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;

Review comment:
   Done. Thanks.








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458377712



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+
+import org.apache.commons.logging.Log;

Review comment:
   Addressed. Thanks.








[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458374873



##
File path: hadoop-project/pom.xml
##
@@ -1727,6 +1728,11 @@
         <artifactId>jna</artifactId>
         <version>${jna.version}</version>
       </dependency>
+      <dependency>

Review comment:
   Since we don't depend on the test scope, I think we should be fine.








[jira] [Commented] (HADOOP-17106) Add unguava implementation for Joiner in hadoop.StringUtils

2020-07-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162327#comment-17162327
 ] 

Hadoop QA commented on HADOOP-17106:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
28s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
25s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 2 
extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 116 unchanged - 2 fixed = 116 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 29s{color} 
| {color:red} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestIngressPortBasedResolver |
|   | hadoop.fs.TestFsShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/17057/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17106 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13008113/HADOOP-17106.002.patch
 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 1e30c41243b5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d23cc9d85d8 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| findbugs | 

[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458373041



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/LzopCodec.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress;
+
+import io.airlift.compress.lzo.LzoCodec;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+public class LzopCodec extends io.airlift.compress.lzo.LzopCodec
+implements Configurable, CompressionCodec {
+@Override
+public Class<? extends Compressor> getCompressorType()
+{
+return LzopCodec.HadoopLzopCompressor.class;

Review comment:
   Good catch. Ideally yes, but `HadoopLzoCompressor` is private in 
airlift, so I cannot easily do it. I added a new override
   ```
   @Override
   public Compressor createCompressor()
   {
   return new HadoopLzopCompressor();
   }
   ```
   to keep them in sync, and I will try to work with aircompressor to move this 
to their codebase.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dbtsai commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


dbtsai commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458358427



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;

Review comment:
   Historically, Hadoop included LZO until it was removed in 
[HADOOP-4874](https://issues.apache.org/jira/browse/HADOOP-4874) due to GPL 
licensing concerns. The GPL LZO codec was then maintained as a separate project 
in https://github.com/twitter/hadoop-lzo with the new codec class 
`com.hadoop.compression.lzo.LzoCodec`. In a Hadoop sequence file, the first 
few bytes of the file record the class name of the compression codec used at 
write time, and Hadoop uses this information to pick the right codec when 
reading the data. As a result, we need a bridge class so that Hadoop can read 
old LZO-compressed data.
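   For reference, a minimal sketch of such a bridge class (illustrative only, 
not the exact code in this PR):
   ```java
   // Hypothetical bridge: keeps the legacy hadoop-lzo class name resolvable,
   // so SequenceFiles whose headers record "com.hadoop.compression.lzo.LzoCodec"
   // can still be opened, while all behavior is delegated to the new codec.
   package com.hadoop.compression.lzo;

   public class LzoCodec extends org.apache.hadoop.io.compress.LzoCodec {
     // Intentionally empty: the codec factory instantiates this class by the
     // name stored in the file header, and the superclass does the real work.
   }
   ```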





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17106) Add unguava implementation for Joiner in hadoop.StringUtils

2020-07-21 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162300#comment-17162300
 ] 

Ayush Saxena commented on HADOOP-17106:
---

Thanx [~ahussein], yes, adding wrappers to StringUtils can work, but we should 
make sure we don't copy them from the Guava implementations; I am pretty sure 
that would get us into trouble. And regarding the added utils, these should be 
taken to branch-2 as well; otherwise cherry-picking new commits that use these 
utils to branch-2 would create trouble.
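
For illustration, an independent wrapper with Joiner-like null handling could 
be as simple as the sketch below (hypothetical code, not the attached patch 
and not derived from Guava):

{code:java}
import java.util.Arrays;

public final class JoinSketch {
  // Guava-free join with Joiner-like semantics: like guava.Joiner, it
  // throws an NPE by default when any element is null.
  public static String join(CharSequence separator, Iterable<?> parts) {
    StringBuilder sb = new StringBuilder();
    boolean first = true;
    for (Object part : parts) {
      if (part == null) {
        throw new NullPointerException("null element passed to join()");
      }
      if (!first) {
        sb.append(separator);
      }
      sb.append(part);
      first = false;
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(join(",", Arrays.asList("a", "b", "c"))); // prints a,b,c
  }
}
{code}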


> Add unguava implementation for Joiner in hadoop.StringUtils
> ---
>
> Key: HADOOP-17106
> URL: https://issues.apache.org/jira/browse/HADOOP-17106
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17106.001.patch, HADOOP-17106.002.patch
>
>
> In order to replace {{com.google.common.base.Joiner}} with a non-guava 
> implementation:
> * Extend the implementation of {{org.apache.hadoop.util.StringUtils.join()}} 
> to support the interfaces offered by Guava and used throughout the Hadoop 
> source code.
> * {{org.apache.hadoop.util.StringUtils.join()}} should have the same behavior 
> as {{guava.Joiner}}. For example, by default it throws an NPE if any of the 
> elements is null.
> * Another Jira should be created to do the actual call replacement in 67 
> source files.
> * Another Jira should be created to do the same for 
> apache.common.stringUtils. We do not want different implementations of the 
> same functionality, each of which may behave differently. Also, once the 
> join implementation lives in a single source file, we can use 
> apache.commons.StringUtils internally without invasive code changes to the 
> entire source tree.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Joiner' in project with mask 
> '*.java'
> Found Occurrences  (103 usages found)
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> SimpleKMSAuditLogger.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.fs  (1 usage found)
> TestPath.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.fs.s3a  (1 usage found)
> StorageStatisticsTracker.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> org.apache.hadoop.ha  (1 usage found)
> TestHAAdmin.java  (1 usage found)
> 34 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs  (8 usages found)
> DFSClient.java  (1 usage found)
> 196 import com.google.common.base.Joiner;
> DFSTestUtil.java  (1 usage found)
> 76 import com.google.common.base.Joiner;
> DFSUtil.java  (1 usage found)
> 108 import com.google.common.base.Joiner;
> DFSUtilClient.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> HAUtil.java  (1 usage found)
> 59 import com.google.common.base.Joiner;
> MiniDFSCluster.java  (1 usage found)
> 145 import com.google.common.base.Joiner;
> StripedFileTestUtil.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestDFSUpgrade.java  (1 usage found)
> 53 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocol  (1 usage found)
> LayoutFlags.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocolPB  (1 usage found)
> TestPBHelper.java  (1 usage found)
> 118 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 43 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
> AsyncLoggerSet.java  (1 usage found)
> 38 import com.google.common.base.Joiner;
> QuorumCall.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> QuorumException.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> QuorumJournalManager.java  (1 usage found)
> 62 import com.google.common.base.Joiner;
> TestQuorumCall.java  (1 usage found)
> 29 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
> HostSet.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestBlockManager.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestBlockReportRateLimiting.java  (1 usage found)
>  

[jira] [Commented] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in Hadoop

2020-07-21 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162297#comment-17162297
 ] 

Ayush Saxena commented on HADOOP-17100:
---

Thanx [~ahussein] for the patches. Can you take a look at the Jenkins 
complaints and confirm?

> Replace Guava Supplier with Java8+ Supplier in Hadoop
> -
>
> Key: HADOOP-17100
> URL: https://issues.apache.org/jira/browse/HADOOP-17100
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17100.001.patch, HADOOP-17100.002.patch, 
> HADOOP-17100.003.patch, HADOOP-17100.006.patch, 
> HADOOP-17100.branch-3.1.006.patch, HADOOP-17100.branch-3.2.006.patch, 
> HADOOP-17100.branch-3.3.006.patch
>
>
> Replace usages of {{guava.Supplier<>}} in unit tests' 
> {{GenericTestUtils.waitFor()}} calls across the Hadoop project.
>  * To make things more convenient for reviewers, I decided:
>  ** Not to replace object instantiation with lambda expressions, because this 
> would increase the patch size significantly and require code adjustments to 
> pass the checkstyle scripts.
>  ** Not to refactor the imports, because this would make reading the patch 
> more difficult.
>  * The merge should be done to the following branches: trunk, branch-3.3, 
> branch-3.2, branch-3.1
> The task is straightforward because {{java.util.Supplier}} has the same API 
> as {{guava.Supplier<>}} and the vast majority of usages come from unit tests.
>  Therefore, we only need to make the following one-line change in all 147 
> files.
> {code:bash}
>  
> -import com.google.common.base.Supplier;
> +import java.util.function.Supplier;
> {code}
> The code change needs to be applied to the following list of files:
> {code:java}
>  
> Targets 
> Occurrences of 'com.google.common.base.Supplier' in project with mask 
> '*.java' 
> Found Occurrences (146 usages found) 
> org.apache.hadoop.conf (1 usage found) 
> TestReconfiguration.java (1 usage found) 
> 21 import com.google.common.base.Supplier; 
> org.apache.hadoop.crypto.key.kms.server (1 usage found) 
> TestKMS.java (1 usage found) 
> 20 import com.google.common.base.Supplier; 
> org.apache.hadoop.fs (2 usages found) 
> FCStatisticsBaseTest.java (1 usage found) 
> 40 import com.google.common.base.Supplier; 
> TestEnhancedByteBufferAccess.java (1 usage found) 
> 75 import com.google.common.base.Supplier; 
> org.apache.hadoop.fs.viewfs (1 usage found) 
> TestViewFileSystemWithTruncate.java (1 usage found) 
> 23 import com.google.common.base.Supplier; 
> org.apache.hadoop.ha (1 usage found) 
> TestZKFailoverController.java (1 usage found) 
> 25 import com.google.common.base.Supplier; 
> org.apache.hadoop.hdfs (20 usages found) 
> DFSTestUtil.java (1 usage found) 
> 79 import com.google.common.base.Supplier; 
> MiniDFSCluster.java (1 usage found) 
> 78 import com.google.common.base.Supplier; 
> TestBalancerBandwidth.java (1 usage found) 
> 29 import com.google.common.base.Supplier; 
> TestClientProtocolForPipelineRecovery.java (1 usage found) 
> 30 import com.google.common.base.Supplier; 
> TestDatanodeRegistration.java (1 usage found) 
> 44 import com.google.common.base.Supplier; 
> TestDataTransferKeepalive.java (1 usage found) 
> 47 import com.google.common.base.Supplier; 
> TestDeadNodeDetection.java (1 usage found) 
> 20 import com.google.common.base.Supplier; 
> TestDecommission.java (1 usage found) 
> 41 import com.google.common.base.Supplier; 
> TestDFSShell.java (1 usage found) 
> 37 import com.google.common.base.Supplier; 
> TestEncryptedTransfer.java (1 usage found) 
> 35 import com.google.common.base.Supplier; 
> TestEncryptionZonesWithKMS.java (1 usage found) 
> 22 import com.google.common.base.Supplier; 
> TestFileCorruption.java (1 usage found) 
> 21 import com.google.common.base.Supplier; 
> TestLeaseRecovery2.java (1 usage found) 
> 32 import com.google.common.base.Supplier; 
> TestLeaseRecoveryStriped.java (1 usage found) 
> 21 import com.google.common.base.Supplier; 
> TestMaintenanceState.java (1 usage found) 
> 63 import com.google.common.base.Supplier; 
> TestPread.java (1 usage found) 
> 61 import com.google.common.base.Supplier; 
> TestQuota.java (1 usage found) 
> 39 import com.google.common.base.Supplier; 
> TestReplaceDatanodeOnFailure.java (1 
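
For illustration of the change above: since {{java.util.function.Supplier}} is 
a functional drop-in for Guava's, call sites compile unchanged after the import 
swap. A self-contained sketch (the helper below is a simplified stand-in for 
{{GenericTestUtils.waitFor()}}, not its real implementation):

{code:java}
import java.util.function.Supplier;

public class WaitForSketch {
  // Simplified stand-in for GenericTestUtils.waitFor(): polls the supplier
  // every intervalMs until it returns true or timeoutMs elapses.
  static void waitFor(Supplier<Boolean> check, int intervalMs, int timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.get()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("timed out waiting for condition");
      }
      Thread.sleep(intervalMs);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // The lambda works identically whether Supplier is Guava's or java.util's.
    waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
    System.out.println("condition met");
  }
}
{code}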

[GitHub] [hadoop] sunchao commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


sunchao commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458319191



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzopCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;

Review comment:
   I think this is to offer a bridge for those who are using the 
[hadoop-lzo](https://mvnrepository.com/artifact/com.hadoop.gplcompression/hadoop-lzo/0.4.16)
 library.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


sunchao commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458317946



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/LzopCodec.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress;
+
+import io.airlift.compress.lzo.LzoCodec;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+public class LzopCodec extends io.airlift.compress.lzo.LzopCodec
+implements Configurable, CompressionCodec {
+@Override
+public Class<? extends Compressor> getCompressorType()
+{
+return LzopCodec.HadoopLzopCompressor.class;
+}
+
+/**
+ * No Hadoop code seems to actually use the compressor, so just return a 
dummy one so the createOutputStream method

Review comment:
   I only know Presto uses the Lzop codec. Maybe we can skip this for this PR 
and focus on LzoCodec @dbtsai?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


jojochuang commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r458296945



##
File path: hadoop-project/pom.xml
##
@@ -1727,6 +1728,11 @@
 <artifactId>jna</artifactId>
 <version>${jna.version}</version>
 </dependency>
+  <dependency>

Review comment:
   Looks okay to me: 
https://mvnrepository.com/artifact/io.airlift/aircompressor/0.16
   No compile-time dependencies. A shaded Hadoop 2 jar as a provided dependency.
   
   
   What's tricky is that it has a test-scope dependency on jmh-core, which is 
GPL 2.0.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17135) ITestS3GuardOutOfBandOperations testListingDelete[auth=false] fails on unversioned bucket

2020-07-21 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17135:

Parent: HADOOP-16829
Issue Type: Sub-task  (was: Bug)

> ITestS3GuardOutOfBandOperations testListingDelete[auth=false] fails on 
> unversioned bucket
> -
>
> Key: HADOOP-17135
> URL: https://issues.apache.org/jira/browse/HADOOP-17135
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Transient failure of:
> {code}
> [ERROR] Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 369.68 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations
> [ERROR] 
> testListingDelete[auth=false](org.apache.hadoop.fs.s3a.ITestS3GuardOutOfBandOperations)
>   Time elapsed: 19.103 s  <<< ERROR!
> java.util.concurrent.ExecutionException: java.io.FileNotFoundException: No 
> such file or directory: 
> s3a://stevel-ireland/fork-0001/test/dir-e34a122f-a04d-48c3-90c1-9d35427fa939/file-1-e34a122f-a04d-48c3-90c1-9d35427fa939
> {code}
> The config is the unguarded codepath; the S3A status was not passed down. 
> The test run was really, really slow (8 threads).
> Hypothesis: the test run took so long that a TTL expired, and the open 
> operation did a HEAD against S3 even though an S3Guard record was found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2163: HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2163:
URL: https://github.com/apache/hadoop/pull/2163#issuecomment-662006657


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  docker  |  33m 11s |  Docker failed to build 
yetus/hadoop:cce5a6f6094.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2163 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2163/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17106) Add unguava implementation for Joiner in hadoop.StringUtils

2020-07-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17106:
---
Attachment: HADOOP-17106.002.patch

> Add unguava implementation for Joiner in hadoop.StringUtils
> ---
>
> Key: HADOOP-17106
> URL: https://issues.apache.org/jira/browse/HADOOP-17106
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17106.001.patch, HADOOP-17106.002.patch
>
>
> In order to replace {{com.google.common.base.Joiner}} with a non-guava 
> implementation:
> * Extend the implementation of {{org.apache.hadoop.util.StringUtils.join()}} 
> to support the interfaces offered by Guava and used throughout the Hadoop 
> source code.
> * {{org.apache.hadoop.util.StringUtils.join()}} should have the same behavior 
> as {{guava.Joiner}}. For example, by default it throws an NPE if any of the 
> elements is null.
> * Another Jira should be created to do the actual call replacement in 67 
> source files.
> * Another Jira should be created to do the same for 
> apache.common.stringUtils. We do not want different implementations of the 
> same functionality, each of which may behave differently. Also, once the 
> join implementation lives in a single source file, we can use 
> apache.commons.StringUtils internally without invasive code changes to the 
> entire source tree.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Joiner' in project with mask 
> '*.java'
> Found Occurrences  (103 usages found)
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> SimpleKMSAuditLogger.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.fs  (1 usage found)
> TestPath.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.fs.s3a  (1 usage found)
> StorageStatisticsTracker.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> org.apache.hadoop.ha  (1 usage found)
> TestHAAdmin.java  (1 usage found)
> 34 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs  (8 usages found)
> DFSClient.java  (1 usage found)
> 196 import com.google.common.base.Joiner;
> DFSTestUtil.java  (1 usage found)
> 76 import com.google.common.base.Joiner;
> DFSUtil.java  (1 usage found)
> 108 import com.google.common.base.Joiner;
> DFSUtilClient.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> HAUtil.java  (1 usage found)
> 59 import com.google.common.base.Joiner;
> MiniDFSCluster.java  (1 usage found)
> 145 import com.google.common.base.Joiner;
> StripedFileTestUtil.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestDFSUpgrade.java  (1 usage found)
> 53 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocol  (1 usage found)
> LayoutFlags.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocolPB  (1 usage found)
> TestPBHelper.java  (1 usage found)
> 118 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 43 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
> AsyncLoggerSet.java  (1 usage found)
> 38 import com.google.common.base.Joiner;
> QuorumCall.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> QuorumException.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> QuorumJournalManager.java  (1 usage found)
> 62 import com.google.common.base.Joiner;
> TestQuorumCall.java  (1 usage found)
> 29 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
> HostSet.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestBlockManager.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestBlockReportRateLimiting.java  (1 usage found)
> 24 import com.google.common.base.Joiner;
> TestPendingDataNodeMessages.java  (1 usage found)
> 41 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.common  (1 usage found)
> StorageInfo.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-07-21 Thread GitBox


hadoop-yetus removed a comment on pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#issuecomment-65960


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  37m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
17 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 36s |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 18s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  20m 59s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  5s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 41s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m 19s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 43s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 40s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  7s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | -1 :x: |  javac  |  24m  7s |  
root-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 generated 4 new + 2047 unchanged - 4 
fixed = 2051 total (was 2051)  |
   | +1 :green_heart: |  compile  |  20m 21s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  javac  |  20m 21s |  
root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~18.04-b09 with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09 generated 4 new + 1939 unchanged - 4 
fixed = 1943 total (was 1943)  |
   | -0 :warning: |  checkstyle  |   3m 19s |  root: The patch generated 53 new 
+ 63 unchanged - 1 fixed = 116 total (was 64)  |
   | +1 :green_heart: |  mvnsite  |   2m 38s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 14s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   1m 32s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 37s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   1m 28s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 233m 41s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to leafMarkers in 
org.apache.hadoop.fs.s3a.tools.MarkerTool.scan(Path, boolean, int, boolean, 
StoreContext, OperationCallbacks)  At 
MarkerTool.java:org.apache.hadoop.fs.s3a.tools.MarkerTool.scan(Path, boolean, 
int, boolean, StoreContext, OperationCallbacks)  At MarkerTool.java:[line 187] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2149/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2149 |
   | Optional Tests | dupname asflicense compile javac javadoc 

[GitHub] [hadoop] szetszwo commented on a change in pull request #2163: HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-21 Thread GitBox


szetszwo commented on a change in pull request #2163:
URL: https://github.com/apache/hadoop/pull/2163#discussion_r458264558



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
##
@@ -359,14 +359,6 @@ public int getListLimit() {
 DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED_DEFAULT);
 LOG.info("{} = {}", DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED,
 snapshotDeletionOrdered);
-if (snapshotDeletionOrdered && !xattrsEnabled) {

Review comment:
   Why remove the check?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
##
@@ -263,11 +269,18 @@ static SnapshotDiffReportListing 
getSnapshotDiffReportListing(FSDirectory fsd,
   final int earliest = snapshottable.getDiffs().iterator().next()
   .getSnapshotId();
   if (snapshot.getId() != earliest) {
-throw new SnapshotException("Failed to delete snapshot " + snapshotName
-+ " from directory " + srcRoot.getFullPathName()
-+ ": " + snapshot + " is not the earliest snapshot id=" + earliest
-+ " (" + DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED
-+ " is " + fsd.isSnapshotDeletionOrdered() + ")");
+final XAttr snapshotXAttr = buildXAttr(snapshotName);
+final List<XAttr> xattrs = Lists.newArrayListWithCapacity(1);
+xattrs.add(snapshotXAttr);
+
+// The snapshot to be deleted is just marked for deletion in the xAttr.
+// The same snapshot delete call can happen multiple times until and unless
+// the very 1st instance of a snapshot delete hides it/removes it from the
+// snapshot list. XAttrSetFlag.REPLACE needs to be set here in order
+// to address this.
+FSDirXAttrOp.unprotectedSetXAttrs(fsd, iip, xattrs,
+EnumSet.of(XAttrSetFlag.CREATE, XAttrSetFlag.REPLACE));
+return null;

Review comment:
   We cannot return null, since doing so will skip the 
fsd.getEditLog().logDeleteSnapshot(..) call.

##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
##
@@ -5119,7 +5119,14 @@
 for storing directory snapshot diffs. By default, value is set to 10.
   </description>
 </property>
-
+<property>
+  <name>dfs.namenode.snapshot.deletion.ordered</name>

Review comment:
   @arp7 seems to suggest not adding the conf to hdfs-default.xml.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
##
@@ -326,6 +323,11 @@ static INode unprotectedSetXAttrs(
 throw new IOException("Can only set '" +
 SECURITY_XATTR_UNREADABLE_BY_SUPERUSER + "' on a file.");
   }
+
+  if (xaName.contains(SNAPSHOT_XATTR_NAME)) {
+Preconditions.checkArgument(inode.isDirectory() &&

Review comment:
   To be consistent with SECURITY_XATTR_UNREADABLE_BY_SUPERUSER, should we 
throw an IOException?
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17106) Add unguava implementation for Joiner in hadoop.StringUtils

2020-07-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17106:
---
Description: 
In order to replace  {{com.google.common.base.Joiner}} with a non-guava 
implementation:
* Extend implementation of {{org.apache.hadoop.util.StringUtils.join()}} to 
support interfaces offered by guava and used throughout hadoop source code.
* {{org.apache.hadoop.util.StringUtils.join()}} should have the same behavior 
as {{guava.Joiner}}. For example, by default it throws NPE if any of the 
elements is null.
* Another Jira should to be created to do the actual call replacement in 67 
source files.  
* Another Jira should be created to do the same for apache.common.stringUtils. 
We do not want to have different implementations of the same functionality, 
while each may have different behavior. Also, when we have the implementation 
of Join, in 1 single source code, we can use apache.commons.StringUtils 
internally without doing invasive code changes to the entire source code.
 
{code:java}
Targets
Occurrences of 'com.google.common.base.Joiner' in project with mask '*.java'
Found Occurrences  (103 usages found)
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
SimpleKMSAuditLogger.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.fs  (1 usage found)
TestPath.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.fs.s3a  (1 usage found)
StorageStatisticsTracker.java  (1 usage found)
25 import com.google.common.base.Joiner;
org.apache.hadoop.ha  (1 usage found)
TestHAAdmin.java  (1 usage found)
34 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs  (8 usages found)
DFSClient.java  (1 usage found)
196 import com.google.common.base.Joiner;
DFSTestUtil.java  (1 usage found)
76 import com.google.common.base.Joiner;
DFSUtil.java  (1 usage found)
108 import com.google.common.base.Joiner;
DFSUtilClient.java  (1 usage found)
20 import com.google.common.base.Joiner;
HAUtil.java  (1 usage found)
59 import com.google.common.base.Joiner;
MiniDFSCluster.java  (1 usage found)
145 import com.google.common.base.Joiner;
StripedFileTestUtil.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestDFSUpgrade.java  (1 usage found)
53 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocol  (1 usage found)
LayoutFlags.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocolPB  (1 usage found)
TestPBHelper.java  (1 usage found)
118 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
43 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
AsyncLoggerSet.java  (1 usage found)
38 import com.google.common.base.Joiner;
QuorumCall.java  (1 usage found)
32 import com.google.common.base.Joiner;
QuorumException.java  (1 usage found)
25 import com.google.common.base.Joiner;
QuorumJournalManager.java  (1 usage found)
62 import com.google.common.base.Joiner;
TestQuorumCall.java  (1 usage found)
29 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
HostSet.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestBlockManager.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestBlockReportRateLimiting.java  (1 usage found)
24 import com.google.common.base.Joiner;
TestPendingDataNodeMessages.java  (1 usage found)
41 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.common  (1 usage found)
StorageInfo.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode  (7 usages found)
BlockPoolManager.java  (1 usage found)
32 import com.google.common.base.Joiner;
BlockRecoveryWorker.java  (1 usage found)
21 import com.google.common.base.Joiner;
BPServiceActor.java  (1 usage found)
75 import com.google.common.base.Joiner;
DataNode.java  (1 usage found)
226 import com.google.common.base.Joiner;
ShortCircuitRegistry.java  (1 usage found)
49 import com.google.common.base.Joiner;
TestDataNodeHotSwapVolumes.java  (1 usage found)
21 import com.google.common.base.Joiner;

[jira] [Commented] (HADOOP-17092) ABFS: Long waits and unintended retries when multiple threads try to fetch token using ClientCreds

2020-07-21 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162166#comment-17162166
 ] 

Hudson commented on HADOOP-17092:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18461/])
HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw (github: rev 
b4b23ef0d1a0afe6251370a61f922ecdb1624165)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java
* (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/ExponentialRetryPolicy.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAzureADAuthenticator.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java


> ABFS: Long waits and unintended retries when multiple threads try to fetch 
> token using ClientCreds
> --
>
> Key: HADOOP-17092
> URL: https://issues.apache.org/jira/browse/HADOOP-17092
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Bilahari T H
>Priority: Major
>  Labels: abfsactive
> Fix For: 3.4.0
>
>
> Issue reported by DB:
> We recently experienced some problems with the ABFS driver that highlighted a 
> possible issue with long hangs following synchronized retries when using the 
> _ClientCredsTokenProvider_ and calling _AbfsClient.getAccessToken_. We have 
> seen https://github.com/apache/hadoop/pull/1923, but it does not directly 
> apply, since we are not using a custom token provider but instead 
> _ClientCredsTokenProvider_, which ultimately relies on _AzureADAuthenticator_.
>  
> The problem was that the critical section of getAccessToken, combined with a 
> possibly redundant retry policy, made jobs hang for a very long time: only 
> one thread at a time could make progress, and that progress amounted to 
> basically retrying on a failing connection for 30-60 minutes.
>  
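
For illustration, a hypothetical sketch (not the ABFS source; names are made 
up) of why this pattern serializes callers: a synchronized token fetch that 
retries inside the critical section makes every waiting thread queue behind 
one long, failing retry loop.

{code:java}
// Hypothetical illustration of the reported anti-pattern; simplified,
// not ABFS code.
public class TokenCacheSketch {
  private String token;

  public synchronized String getAccessToken() throws InterruptedException {
    if (token == null) {
      // Retrying while holding the lock: against a failing endpoint, every
      // other caller blocks for the full duration of this loop.
      for (int attempt = 0; attempt < 30 && token == null; attempt++) {
        try {
          token = fetchTokenOnce();
        } catch (RuntimeException e) {
          Thread.sleep(60_000L); // backoff inside the critical section
        }
      }
    }
    return token;
  }

  // Stand-in for the real HTTP call to the identity endpoint.
  private String fetchTokenOnce() {
    throw new RuntimeException("simulated endpoint failure");
  }
}
{code}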



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2145: HADOOP-17133. Implement HttpServer2 metrics

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2145:
URL: https://github.com/apache/hadoop/pull/2145#issuecomment-661972540


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 46s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 22s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 40s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 55s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 32s |  hadoop-kms in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-hdfs-httpfs in trunk failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 16s |  hadoop-common-project/hadoop-common in 
trunk has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 52s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  22m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 39s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  20m 39s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m 18s |  root: The patch generated 1 new 
+ 227 unchanged - 0 fixed = 228 total (was 227)  |
   | +1 :green_heart: |  mvnsite  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | -1 :x: |  shadedclient  |  16m 51s |  patch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 43s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-kms in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-hdfs-httpfs in the patch failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   4m 49s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m  5s |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  |   2m  2s |  hadoop-kms in the patch failed.  |
   | +1 :green_heart: |  unit  |   5m 32s |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 191m 32s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.log.TestLogLevel |
   |   | hadoop.crypto.key.kms.server.TestKMS |
   |   | hadoop.crypto.key.kms.server.TestKMSWithZK |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2145/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2145 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8af58812b698 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d57462f2dae |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK 

[jira] [Created] (HADOOP-17146) Replace Guava.Splitter with common.util.StringUtils

2020-07-21 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17146:
--

 Summary: Replace Guava.Splitter with common.util.StringUtils
 Key: HADOOP-17146
 URL: https://issues.apache.org/jira/browse/HADOOP-17146
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: common
Reporter: Ahmed Hussein


Hadoop source code uses {{com.google.common.base.Splitter}}. We need to 
analyze the performance overhead of Splitter and consider different 
implementations such as apache-commons.

Hadoop has its own implementation, {{org.apache.hadoop.util.StringUtils.split()}}. 
Therefore, we should make all split() calls use the wrapper in common. This 
will make the utility calls less invasive and less confusing, as the behavior 
of apache-commons StringUtils is not the same as Guava's.

Once we have the wrapper, and all calls go through it, we can decide to use 
apache-commons or apply specific optimizations without changing the entire 
source tree, as the sketch after this paragraph illustrates.
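
For illustration, a minimal wrapper of the kind described (hypothetical code, 
not a committed implementation; note that Guava's Splitter and apache-commons 
differ on empty tokens, which is exactly why pinning the behavior in one place 
matters):

{code:java}
import java.util.ArrayList;
import java.util.List;

public final class SplitSketch {
  // Single choke point for split behavior: callers never touch Guava or
  // commons directly, so the backing implementation can change in one place.
  // This version keeps empty tokens, e.g. "a,,b" -> [a, , b].
  public static List<String> split(String str, char separator) {
    List<String> out = new ArrayList<>();
    int start = 0;
    for (int i = 0; i <= str.length(); i++) {
      if (i == str.length() || str.charAt(i) == separator) {
        out.add(str.substring(start, i));
        start = i + 1;
      }
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(split("a,b,,c", ',')); // prints [a, b, , c]
  }
}
{code}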

 
{code:bash}
Targets
Occurrences of 'import com.google.common.base.Splitter;' in project with 
mask '*.java'
Found Occurrences  (18 usages found)
org.apache.hadoop.crypto  (1 usage found)
CryptoCodec.java  (1 usage found)
34 import com.google.common.base.Splitter;
org.apache.hadoop.mapred.nativetask.kvtest  (1 usage found)
KVTest.java  (1 usage found)
44 import com.google.common.base.Splitter;
org.apache.hadoop.mapreduce.v2.util  (1 usage found)
MRWebAppUtil.java  (1 usage found)
20 import com.google.common.base.Splitter;
org.apache.hadoop.metrics2.impl  (1 usage found)
MetricsConfig.java  (1 usage found)
32 import com.google.common.base.Splitter;
org.apache.hadoop.registry.client.impl.zk  (1 usage found)
RegistrySecurity.java  (1 usage found)
22 import com.google.common.base.Splitter;
org.apache.hadoop.security.authentication.server  (1 usage found)
MultiSchemeAuthenticationHandler.java  (1 usage found)
34 import com.google.common.base.Splitter;
org.apache.hadoop.security.token.delegation.web  (1 usage found)
MultiSchemeDelegationTokenAuthenticationHandler.java  (1 usage found)
40 import com.google.common.base.Splitter;
org.apache.hadoop.tools.dynamometer  (1 usage found)
Client.java  (1 usage found)
22 import com.google.common.base.Splitter;
org.apache.hadoop.tools.dynamometer.workloadgenerator.audit  (2 usages 
found)
AuditLogDirectParser.java  (1 usage found)
20 import com.google.common.base.Splitter;
AuditReplayThread.java  (1 usage found)
20 import com.google.common.base.Splitter;
org.apache.hadoop.util  (2 usages found)
TestApplicationClassLoader.java  (1 usage found)
44 import com.google.common.base.Splitter;
ZKUtil.java  (1 usage found)
31 import com.google.common.base.Splitter;
org.apache.hadoop.yarn.api.records.timeline  (1 usage found)
TimelineEntityGroupId.java  (1 usage found)
27 import com.google.common.base.Splitter;
org.apache.hadoop.yarn.server.resourcemanager.scheduler  (1 usage found)
QueueMetrics.java  (1 usage found)
55 import com.google.common.base.Splitter;
org.apache.hadoop.yarn.util  (1 usage found)
StringHelper.java  (1 usage found)
21 import com.google.common.base.Splitter;
org.apache.hadoop.yarn.webapp  (1 usage found)
WebApp.java  (1 usage found)
37 import com.google.common.base.Splitter;
org.apache.hadoop.yarn.webapp.hamlet  (1 usage found)
HamletImpl.java  (1 usage found)
22 import com.google.common.base.Splitter;
org.apache.hadoop.yarn.webapp.hamlet2  (1 usage found)
HamletImpl.java  (1 usage found)
22 import com.google.common.base.Splitter;

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ merged pull request #2150: Hadoop 17132. ABFS: Fix Rename and Delete Idempotency check trigger

2020-07-21 Thread GitBox


DadanielZ merged pull request #2150:
URL: https://github.com/apache/hadoop/pull/2150


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ merged pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-21 Thread GitBox


DadanielZ merged pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 commented on a change in pull request #2163: HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-21 Thread GitBox


mukul1987 commented on a change in pull request #2163:
URL: https://github.com/apache/hadoop/pull/2163#discussion_r458204825



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
##
@@ -41,10 +41,7 @@
 import java.util.List;
 import java.util.ListIterator;
 
-import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.CRYPTO_XATTR_ENCRYPTION_ZONE;
-import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.SECURITY_XATTR_UNREADABLE_BY_SUPERUSER;
-import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.XATTR_SATISFY_STORAGE_POLICY;
-import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.CRYPTO_XATTR_FILE_ENCRYPTION_INFO;
+import static org.apache.hadoop.hdfs.server.common.HdfsServerConstants.*;

Review comment:
   Expand the wildcard imports





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-07-21 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HADOOP-17144:

Attachment: HADOOP-17144.001.patch
Status: Patch Available  (was: Open)

> Update Hadoop's lz4 to v1.9.2
> -
>
> Key: HADOOP-17144
> URL: https://issues.apache.org/jira/browse/HADOOP-17144
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HADOOP-17144.001.patch
>
>
> Update hadoop's native lz4 to v1.9.2 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2164: YARN-10358. Fix findbugs warnings in hadoop-yarn-project on branch-2.10.

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2164:
URL: https://github.com/apache/hadoop/pull/2164#issuecomment-661926689


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  10m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2.10 Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   ||| _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m 14s |  ASF License check generated no 
output?  |
   |  |   |  12m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2164/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2164 |
   | Optional Tests | dupname asflicense xml |
   | uname | Linux dd4945887634 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | branch-2.10 / 8cd8b41 |
   | Max. process+thread count | 31 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2164/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims opened a new pull request #2164: YARN-10358. Fix findbugs warnings in hadoop-yarn-project on branch-2.10.

2020-07-21 Thread GitBox


iwasakims opened a new pull request #2164:
URL: https://github.com/apache/hadoop/pull/2164


   Please refer to the comments on 
[YARN-10358](https://issues.apache.org/jira/browse/YARN-10358) for the 
description.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2163: HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2163:
URL: https://github.com/apache/hadoop/pull/2163#issuecomment-661907416


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 25s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m  7s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 28s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m  6s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  1s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 42s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 3 new + 90 unchanged - 0 fixed = 93 total (was 90)  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 109m 38s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 185m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2163/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2163 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 87a0f4270e08 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d57462f2dae |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2163/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#issuecomment-661865217


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 47s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 52s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   1m 11s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 33s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  75m 28s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2154 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f0efb2c8bebd 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d57462f2dae |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/3/testReport/ |
   | Max. process+thread count | 293 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2163: HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2163:
URL: https://github.com/apache/hadoop/pull/2163#issuecomment-661864535


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 46s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 45s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   3m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   3m 40s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 46s |  hadoop-hdfs-project/hadoop-hdfs: 
The patch generated 4 new + 87 unchanged - 0 fixed = 91 total (was 87)  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 11s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 36s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 102m 49s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 191m 23s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor |
   |   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDFSStripedInputStream |
   |   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2163/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2163 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 17ba5119023d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d57462f2dae |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2163/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 

[GitHub] [hadoop] ayushtkn commented on pull request #2160: HDFS-15478: When Empty mount points, we are assigning fallback link to self. But it should not use full URI for target fs.

2020-07-21 Thread GitBox


ayushtkn commented on pull request #2160:
URL: https://github.com/apache/hadoop/pull/2160#issuecomment-661855928


   Thanx @umamaheswararao for the work here. The changes LGTM.
   Can we document this behavior in `ViewFsOverloadScheme.md` as well?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] touchida commented on pull request #2135: HDFS-15465. Support WebHDFS accesses to the data stored in secure Dat…

2020-07-21 Thread GitBox


touchida commented on pull request #2135:
URL: https://github.com/apache/hadoop/pull/2135#issuecomment-661825717


   @sunchao Thanks for your comment!
   > curl -i 
"http://:/webhdfs/v1/?op=OPEN==0"
   
   No, it won't work.
   It will result in an `AccessControlException` with a `403` response code, as 
follows.
   ```
   $ curl -i 
"http://:/webhdfs/v1/?op=OPEN==0"
   HTTP/1.1 403 Forbidden
   (omitted)
   
{"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"DestHost:destPort
 : , LocalHost:localPort :0. Failed on 
local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]"}}
   ```
   The corresponding Datanode log is as follows:
   ```
   2020-07-21 09:16:02,559 WARN org.apache.hadoop.ipc.Client: Exception 
encountered while connecting to the server : 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
   2020-07-21 09:16:02,577 WARN org.apache.hadoop.ipc.Client: Exception 
encountered while connecting to the server : 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
   2020-07-21 09:16:02,578 INFO 
org.apache.hadoop.io.retry.RetryInvocationHandler: java.io.IOException: 
DestHost:destPort : , LocalHost:localPort 
:0. Failed on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS], while invoking 
ClientNamenodeProtocolTranslatorPB.getBlockLocations over 
: after 1 failover attempts. Trying to failover after 
sleeping for 1224ms.
   (omitted)
   2020-07-21 09:18:40,881 INFO 
org.apache.hadoop.io.retry.RetryInvocationHandler: java.io.IOException: 
DestHost:destPort : , LocalHost:localPort 
:0. Failed on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS], while invoking 
ClientNamenodeProtocolTranslatorPB.getBlockLocations over 
: after 14 failover attempts. Trying to failover after 
sleeping for 20346ms.
   (omitted)
   2020-07-21 09:19:01,243 WARN org.apache.hadoop.ipc.Client: Exception 
encountered while connecting to the server : 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TOKEN, KERBEROS]
   ```
   This is because, in the absence of delegation tokens, 
`org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler#channelRead0`
 will create an insecure `DFSClient`, which cannot talk to a secure Namenode.
   - 
https://github.com/apache/hadoop/blob/da0006f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java#L261
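   
   (For reference, the WebHDFS REST API does accept an existing delegation 
token via the `delegation` query parameter, e.g. 
`...?op=OPEN&delegation=<token>`; the failure above is specific to requests 
that carry no such token.)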



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-07-21 Thread Luca Canali (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161970#comment-17161970
 ] 

Luca Canali edited comment on HADOOP-16830 at 7/21/20, 12:21 PM:
-

[~ste...@apache.org] I have compiled and also briefly tested the PR with Spark 
reading from S3A, and the first exploration I did looks quite good to me. As 
mentioned previously, one of my goals with this is to add time-based metrics to 
IO Statistics, as in this [proof-of-concept implementation of some read time 
metrics for 
S3A|https://github.com/LucaCanali/hadoop/commit/4ed077061e5826711307941dd397250e2afc47a2].
I was wondering if it could make sense to already include in this PR a list of 
Statistics names for time-based IO instrumentation, so as to guide the naming 
convention and future implementation efforts?


was (Author: lucacanali):
[~ste...@apache.org] I have compiled and also briefly tested the PR with Spark 
reading from S3A, and the first exploration I did looks quite good to me. As 
mentioned previously, one of my goals with this is to add time-based metrics to 
IO Statistics, as in this [proof-of-concept implementation of some read time 
metrics for 
S3A|https://github.com/LucaCanali/hadoop/commit/4ed077061e5826711307941dd397250e2afc47a2].
I was wondering if it could make sense to already include in this patch a list 
of Statistics names for time-based IO instrumentation, so as to guide the 
naming convention and future implementation efforts?

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala  can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of
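
As a rough illustration of points 1 and 4 of the proposal, a minimal sketch 
(hypothetical names and shapes; the actual API is whatever the PR defines):

{code:java}
import java.util.Map;

/** Sketch of an interface that serves up statistics (point 1). */
interface IOStatisticsSketch {
  /** Snapshot of counter name -> value for this source. */
  Map<String, Long> counters();
}

/**
 * Sketch of the pass-through idea (point 4): wrapper streams expose
 * the statistics of whatever stream they wrap.
 */
interface IOStatisticsSourceSketch {
  IOStatisticsSketch getIOStatistics();
}
{code}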



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16830) Add public IOStatistics API; S3A to support

2020-07-21 Thread Luca Canali (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161970#comment-17161970
 ] 

Luca Canali commented on HADOOP-16830:
--

[~ste...@apache.org] I have compiled and also briefly tested the PR with Spark 
reading from S3A, and the first exploration I did looks quite good to me. As 
mentioned previously, one of my goals with this is to add time-based metrics to 
IO Statistics, as in this [proof-of-concept implementation of some read time 
metrics for 
S3A|https://github.com/LucaCanali/hadoop/commit/4ed077061e5826711307941dd397250e2afc47a2].
I was wondering if it could make sense to already include in this patch a list 
of Statistics names for time-based IO instrumentation, so as to guide the 
naming convention and future implementation efforts?

> Add public IOStatistics API; S3A to support
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala  can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2123: HADOOP-17092. ABFS: Making AzureADAuthenticator.getToken() throw HttpException if a…

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2123:
URL: https://github.com/apache/hadoop/pull/2123#issuecomment-661805410


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 55s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  22m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 24s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   | -0 :warning: |  patch  |   1m  8s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 15s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  66m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2123 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 319b05b85a5e 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d57462f2dae |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/12/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/12/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/12/testReport/ |
   | Max. process+thread count | 318 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2123/12/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | 

[jira] [Commented] (HADOOP-17106) Replace Guava Joiner with Java8 String Join

2020-07-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161953#comment-17161953
 ] 

Ahmed Hussein commented on HADOOP-17106:


Thanks [~ayushtkn],
{quote}I changed to {{StringUtils.join(',', Arrays.asList(localInterfaceAddrs)) 
+ "]");}}

and that gave me the same result.  For maps {{withKeyValueSeparator}} I think 
some wrapper would be required, maybe can add one in {{StringUtils}} only.
{quote}
Yes, I was thinking that it would be best to add the missing wrappers to 
{{org.apache.hadoop.util.StringUtils}}. Then we would have only one API 
throughout the entire codebase, which would enable further optimizations and 
refactoring.

So, I think it would be best to split the patch in two:
 # add the missing wrappers to {{StringUtils}} (a sketch follows below);
 # replace {{guava.Joiner}} with the new wrappers.
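
A rough sketch of what such a wrapper could look like (a hypothetical helper, 
not actual Hadoop code; a real version would live in 
{{org.apache.hadoop.util.StringUtils}}):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class JoinerReplacementSketch {

  // Hypothetical replacement for Joiner.withKeyValueSeparator(...).join(map).
  static String joinMap(String entrySep, String kvSep, Map<?, ?> map) {
    return map.entrySet().stream()
        .map(e -> e.getKey() + kvSep + e.getValue())
        .collect(Collectors.joining(entrySep));
  }

  public static void main(String[] args) {
    // Plain list joins are covered by the JDK already:
    // Joiner.on(",").join(parts) becomes String.join(",", parts).
    System.out.println(String.join(",", "eth0", "eth1"));  // eth0,eth1

    // Map joins need the wrapper:
    Map<String, Integer> map = new LinkedHashMap<>();
    map.put("a", 1);
    map.put("b", 2);
    System.out.println(joinMap(",", "=", map));            // a=1,b=2
  }
}
{code}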

> Replace Guava Joiner with Java8 String Join
> ---
>
> Key: HADOOP-17106
> URL: https://issues.apache.org/jira/browse/HADOOP-17106
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17106.001.patch
>
>
> Replace \{{com.google.common.base.Joiner}} with String.join.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Joiner' in project with mask 
> '*.java'
> Found Occurrences  (103 usages found)
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> SimpleKMSAuditLogger.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.fs  (1 usage found)
> TestPath.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.fs.s3a  (1 usage found)
> StorageStatisticsTracker.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> org.apache.hadoop.ha  (1 usage found)
> TestHAAdmin.java  (1 usage found)
> 34 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs  (8 usages found)
> DFSClient.java  (1 usage found)
> 196 import com.google.common.base.Joiner;
> DFSTestUtil.java  (1 usage found)
> 76 import com.google.common.base.Joiner;
> DFSUtil.java  (1 usage found)
> 108 import com.google.common.base.Joiner;
> DFSUtilClient.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> HAUtil.java  (1 usage found)
> 59 import com.google.common.base.Joiner;
> MiniDFSCluster.java  (1 usage found)
> 145 import com.google.common.base.Joiner;
> StripedFileTestUtil.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestDFSUpgrade.java  (1 usage found)
> 53 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocol  (1 usage found)
> LayoutFlags.java  (1 usage found)
> 26 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.protocolPB  (1 usage found)
> TestPBHelper.java  (1 usage found)
> 118 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 43 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
> AsyncLoggerSet.java  (1 usage found)
> 38 import com.google.common.base.Joiner;
> QuorumCall.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> QuorumException.java  (1 usage found)
> 25 import com.google.common.base.Joiner;
> QuorumJournalManager.java  (1 usage found)
> 62 import com.google.common.base.Joiner;
> TestQuorumCall.java  (1 usage found)
> 29 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
> HostSet.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> TestBlockManager.java  (1 usage found)
> 20 import com.google.common.base.Joiner;
> TestBlockReportRateLimiting.java  (1 usage found)
> 24 import com.google.common.base.Joiner;
> TestPendingDataNodeMessages.java  (1 usage found)
> 41 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.common  (1 usage found)
> StorageInfo.java  (1 usage found)
> 37 import com.google.common.base.Joiner;
> org.apache.hadoop.hdfs.server.datanode  (7 usages found)
> BlockPoolManager.java  (1 usage found)
> 32 import com.google.common.base.Joiner;
> BlockRecoveryWorker.java  (1 usage found)
> 21 import com.google.common.base.Joiner;
> 

[jira] [Commented] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier in Hadoop

2020-07-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161950#comment-17161950
 ] 

Ahmed Hussein commented on HADOOP-17100:


Thanks [~ayushtkn], I have uploaded patches for branch-3.1, branch-3.2, and 
branch-3.3.

> Replace Guava Supplier with Java8+ Supplier in Hadoop
> -
>
> Key: HADOOP-17100
> URL: https://issues.apache.org/jira/browse/HADOOP-17100
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17100.001.patch, HADOOP-17100.002.patch, 
> HADOOP-17100.003.patch, HADOOP-17100.006.patch, 
> HADOOP-17100.branch-3.1.006.patch, HADOOP-17100.branch-3.2.006.patch, 
> HADOOP-17100.branch-3.3.006.patch
>
>
> Replace usages of {{guava.Supplier<>}} in unit-test calls to 
> {{GenericTestUtils.waitFor()}} across the Hadoop project.
>  * To make things more convenient for reviewers, I decided:
>  ** Not to replace Object instantiation by lambda expressions because this 
> will increase the patch size significantly and require code adjustments to 
> pass the checkstyle scripts.
>  ** Not to refactor the imports because this will make reading the patch more 
> difficult.
>  * Merge should be done to the following branches: trunk, branch-3.3, 
> branch-3.2, branch-3.1
> The task is straightforward because {{java.util.function.Supplier}} has the 
> same API as {{guava.Supplier<>}} and the vast majority of usages come from 
> unit tests.
>  Therefore, we only need to make the following "one-line" change in all 147 
> files.
> {code:bash}
>  
> -import com.google.common.base.Supplier;
> +import java.util.function.Supplier;
> {code}
> The code change needs to be applied to the following list of files:
> {code:java}
>  
> Targets 
> Occurrences of 'com.google.common.base.Supplier' in project with mask 
> '*.java' 
> Found Occurrences (146 usages found) 
> org.apache.hadoop.conf (1 usage found) 
> TestReconfiguration.java (1 usage found) 
> 21 import com.google.common.base.Supplier; 
> org.apache.hadoop.crypto.key.kms.server (1 usage found) 
> TestKMS.java (1 usage found) 
> 20 import com.google.common.base.Supplier; 
> org.apache.hadoop.fs (2 usages found) 
> FCStatisticsBaseTest.java (1 usage found) 
> 40 import com.google.common.base.Supplier; 
> TestEnhancedByteBufferAccess.java (1 usage found) 
> 75 import com.google.common.base.Supplier; 
> org.apache.hadoop.fs.viewfs (1 usage found) 
> TestViewFileSystemWithTruncate.java (1 usage found) 
> 23 import com.google.common.base.Supplier; 
> org.apache.hadoop.ha (1 usage found) 
> TestZKFailoverController.java (1 usage found) 
> 25 import com.google.common.base.Supplier; 
> org.apache.hadoop.hdfs (20 usages found) 
> DFSTestUtil.java (1 usage found) 
> 79 import com.google.common.base.Supplier; 
> MiniDFSCluster.java (1 usage found) 
> 78 import com.google.common.base.Supplier; 
> TestBalancerBandwidth.java (1 usage found) 
> 29 import com.google.common.base.Supplier; 
> TestClientProtocolForPipelineRecovery.java (1 usage found) 
> 30 import com.google.common.base.Supplier; 
> TestDatanodeRegistration.java (1 usage found) 
> 44 import com.google.common.base.Supplier; 
> TestDataTransferKeepalive.java (1 usage found) 
> 47 import com.google.common.base.Supplier; 
> TestDeadNodeDetection.java (1 usage found) 
> 20 import com.google.common.base.Supplier; 
> TestDecommission.java (1 usage found) 
> 41 import com.google.common.base.Supplier; 
> TestDFSShell.java (1 usage found) 
> 37 import com.google.common.base.Supplier; 
> TestEncryptedTransfer.java (1 usage found) 
> 35 import com.google.common.base.Supplier; 
> TestEncryptionZonesWithKMS.java (1 usage found) 
> 22 import com.google.common.base.Supplier; 
> TestFileCorruption.java (1 usage found) 
> 21 import com.google.common.base.Supplier; 
> TestLeaseRecovery2.java (1 usage found) 
> 32 import com.google.common.base.Supplier; 
> TestLeaseRecoveryStriped.java (1 usage found) 
> 21 import com.google.common.base.Supplier; 
> TestMaintenanceState.java (1 usage found) 
> 63 import com.google.common.base.Supplier; 
> TestPread.java (1 usage found) 
> 61 import com.google.common.base.Supplier; 
> TestQuota.java (1 usage found) 
> 39 import com.google.common.base.Supplier; 
> TestReplaceDatanodeOnFailure.java (1 

[GitHub] [hadoop] aajisaka commented on pull request #2145: HADOOP-17133. Implement HttpServer2 metrics

2020-07-21 Thread GitBox


aajisaka commented on pull request #2145:
URL: https://github.com/apache/hadoop/pull/2145#issuecomment-661790760


   > Do you plan to add tests?
   
   I'm planning to add a regression test.
   
   > on the topic of metrics and HttpFS, look at #2069
   
   Thank you for the information. I had missed that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


steveloughran commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r457977064



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class LzoCodec extends org.apache.hadoop.io.compress.LzoCodec {
+private static final Log LOG = LogFactory.getLog(LzoCodec.class);
+
+static final String gplLzoCodec = LzoCodec.class.getName();
+static final String hadoopLzoCodec = 
org.apache.hadoop.io.compress.LzoCodec.class.getName();
+static boolean warned = false;
+
+static {
+LOG.info("Bridging " + gplLzoCodec + " to " + hadoopLzoCodec + ".");

Review comment:
   With a move to SLF4J you can do the string construction on demand with 
{} placeholders.
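   
   A minimal, self-contained sketch of the suggested SLF4J form (illustrative 
class and strings, not the actual patch):
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class Slf4jPlaceholderSketch {
     private static final Logger LOG =
         LoggerFactory.getLogger(Slf4jPlaceholderSketch.class);

     public static void main(String[] args) {
       String gplLzoCodec = "com.hadoop.compression.lzo.LzoCodec";
       String hadoopLzoCodec = "org.apache.hadoop.io.compress.LzoCodec";
       // The {} placeholders are substituted only if INFO is enabled, so no
       // string is built when the level filters the message out.
       LOG.info("Bridging {} to {}.", gplLzoCodec, hadoopLzoCodec);
     }
   }
   ```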

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/LzopCodec.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress;
+
+import io.airlift.compress.lzo.LzoCodec;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+public class LzopCodec extends io.airlift.compress.lzo.LzopCodec
+implements Configurable, CompressionCodec {
+@Override
+public Class getCompressorType()
+{
+return LzopCodec.HadoopLzopCompressor.class;
+}
+
+/**
+ * No Hadoop code seems to actually use the compressor, so just return a 
dummy one so the createOutputStream method

Review comment:
   does anyone know about downstream uses?

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class LzoCodec extends org.apache.hadoop.io.compress.LzoCodec {
+private static final Log LOG = LogFactory.getLog(LzoCodec.class);
+
+static 

[GitHub] [hadoop] iwasakims commented on pull request #2155: HADOOP-17138. Fix spotbugs warnings surfaced after upgrade to 4.0.6.

2020-07-21 Thread GitBox


iwasakims commented on pull request #2155:
URL: https://github.com/apache/hadoop/pull/2155#issuecomment-661780618


   > How about fixing DLS_DEAD_LOCAL_STORE instead of ignoring the warning?
   
   Thanks, @aajisaka. It would be better to fix the code and keep the diff 
small. I thought filtering the warning via excludeFilterFile would make the 
intent clearer, but it is an ad hoc workaround for a compiler-specific issue 
anyway; the JIRA can be consulted for the background. I updated the patch.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-21 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161402#comment-17161402
 ] 

Masatake Iwasaki edited comment on HADOOP-17138 at 7/21/20, 10:31 AM:
--

{noformat}
M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
 overrides the nullness annotation of parameter result in an incompatible way  
At ThrottledAsyncChecker.java:[lines 170-175]
{noformat}

The {{onSuccess}} in ThrottledAsyncChecker accepts null and sets it as a valid 
value of {{LastCheckResult.result}}. While annotating the parameter 
{{@Nullable}} makes sense here, using two different Nullable annotations 
(javax.annotation.Nullable vs. 
org.checkerframework.checker.nullness.qual.Nullable) generates 
NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION.
https://github.com/spotbugs/spotbugs/issues/734

I just removed the Hadoop-side javax.annotation.Nullable to clear the warning 
in the PR.
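
A minimal sketch of the clash (hypothetical interface, not the Guava one; 
assumes checker-qual and jsr305 are on the classpath):

{code:java}
import org.checkerframework.checker.nullness.qual.Nullable;

interface CallbackSketch<T> {
  void onSuccess(@Nullable T result);  // checkerframework @Nullable
}

class NullableClashSketch implements CallbackSketch<String> {
  @Override
  public void onSuccess(@javax.annotation.Nullable String result) {
    // The override uses a different @Nullable than the interface, so
    // SpotBugs 4.x reports NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION here,
    // as described in the spotbugs issue linked above.
  }
}
{code}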




was (Author: iwasakims):
{noformat}
M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
 overrides the nullness annotation of parameter result in an incompatible way  
At ThrottledAsyncChecker.java:[lines 170-175]
{noformat}

The {{onSuccess}} in ThrottledAsyncChecker accepts null and sets it as a valid 
value of {{LastCheckResult.result}}.


> Fix spotbugs warnings surfaced after upgrade to 4.0.6
> -
>
> Key: HADOOP-17138
> URL: https://issues.apache.org/jira/browse/HADOOP-17138
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> Spotbugs 4.0.6 generated additional warnings.
> {noformat}
> $ find . -name findbugsXml.xml | xargs -n 1 
> /opt/spotbugs-4.0.6/bin/convertXmlToText -longBugCodes
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  
> At Server.java:[line 3729]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L5 in 
> org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  
> At Server.java:[line 3717]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
>  overrides the nullness annotation of parameter $L1 in an incompatible way  
> At DatasetVolumeChecker.java:[line 322]
> H D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At DatasetVolumeChecker.java:[lines 358-376]
> M D NP_METHOD_PARAMETER_TIGHTENS_ANNOTATION NP: Method 
> org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
>  overrides the nullness annotation of parameter result in an incompatible way 
>  At ThrottledAsyncChecker.java:[lines 170-175]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L8 in 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
>  EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 380-397]
> M D NP_PARAMETER_MUST_BE_NONNULL_BUT_MARKED_AS_NULLABLE NP: result must be 
> non-null but is marked as nullable  At LocatedFileStatusFetcher.java:[lines 
> 291-309]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to $L6 in 
> org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
> SLSRunner.java:[line 816]
> H C UMAC_UNCALLABLE_METHOD_OF_ANONYMOUS_CLASS UMAC: Uncallable method 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
>  defined in anonymous class  At 
> TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
> M D DLS_DEAD_LOCAL_STORE DLS: Dead store to entities in 
> org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
>   At TestTimelineReaderHBaseDown.java:[line 190]
> M V EI_EXPOSE_REP EI: 
> org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose 
> internal representation by returning CosNInputStream$ReadBuffer.buffer  At 
> CosNInputStream.java:[line 87]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant opened a new pull request #2163: HDFS-15480. Ordered snapshot deletion: record snapshot deletion in XAttr

2020-07-21 Thread GitBox


bshashikant opened a new pull request #2163:
URL: https://github.com/apache/hadoop/pull/2163


   
   Please see https://issues.apache.org/jira/browse/HDFS-15480
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-21 Thread GitBox


steveloughran commented on pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#issuecomment-661770848


   ok, looks good. Just remove that catch of the interrupted exception and 
instead add that exception to the list of exceptions the test can throw.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-21 Thread GitBox


steveloughran commented on a change in pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#discussion_r457991108



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsInputStreamStatistics.java
##
@@ -285,6 +292,96 @@ public void testWithNullStreamStatistics() throws 
IOException {
 }
   }
 
+  /**
+   * Testing readAhead counters in AbfsInputStream with 30 seconds timeout.
+   */
+  @Test(timeout = TIMEOUT_30_SECONDS)
+  public void testReadAheadCounters() throws IOException {
+describe("Test to check correct values for readAhead counters in "
++ "AbfsInputStream");
+
+AzureBlobFileSystem fs = getFileSystem();
+AzureBlobFileSystemStore abfss = fs.getAbfsStore();
+Path readAheadCountersPath = path(getMethodName());
+
+/*
+ * Setting the block size for readAhead as 4KB.
+ */
+abfss.getAbfsConfiguration().setReadBufferSize(CUSTOM_BLOCK_BUFFER_SIZE);
+
+AbfsOutputStream out = null;
+AbfsInputStream in = null;
+
+try {
+
+  /*
+   * Creating a file of 1MB size.
+   */
+  out = createAbfsOutputStreamWithFlushEnabled(fs, readAheadCountersPath);
+  out.write(defBuffer);
+  out.close();
+
+  in = abfss.openFileForRead(readAheadCountersPath, fs.getFsStatistics());
+
+  /*
+   * Reading 1KB after each i * KB positions. Hence the reads are from 0
+   * to 1KB, 1KB to 2KB, and so on.. for 5 operations.
+   */
+  for (int i = 0; i < 5; i++) {
+in.seek(ONE_KB * i);
+in.read(defBuffer, ONE_KB * i, ONE_KB);
+  }
+  AbfsInputStreamStatisticsImpl stats =
+  (AbfsInputStreamStatisticsImpl) in.getStreamStatistics();
+
+  /*
+   * Since, readAhead is done in background threads. Sometimes, the
+   * threads aren't finished in the background and could result in
+   * inaccurate results. So, we wait till we have the accurate values
+   * with a limit of 30 seconds as that's when the test times out.
+   *
+   */
+  while (stats.getRemoteBytesRead() < CUSTOM_READ_AHEAD_BUFFER_SIZE
+  || stats.getReadAheadBytesRead() < CUSTOM_BLOCK_BUFFER_SIZE) {
+Thread.sleep(THREAD_SLEEP_10_SECONDS);
+  }
+
+  /*
+   * Verifying the counter values of readAheadBytesRead and 
remoteBytesRead.
+   *
+   * readAheadBytesRead : Since, we read 1KBs 5 times, that means we go
+   * from 0 to 5KB in the file. The bufferSize is set to 4KB, and since
+   * we have 8 blocks of readAhead buffer. We would have 8 blocks of 4KB
+   * buffer. Our read is till 5KB, hence readAhead would ideally read 2
+   * blocks of 4KB which is equal to 8KB. But, sometimes to get more than
+   * one block from readAhead buffer we might have to wait for background
+   * threads to fill the buffer and hence we might do remote read which
+   * would be faster. Therefore, readAheadBytesRead would be equal to or
+   * greater than 4KB.
+   *
+   * remoteBytesRead : Since, the bufferSize is set to 4KB and the number
+   * of blocks or readAheadQueueDepth is equal to 8. We would read 8 * 4
+   * KB buffer on the first read, which is equal to 32KB. But, if we are 
not
+   * able to read some bytes that were in the buffer after doing
+   * readAhead, we might use remote read again. Thus, the bytes read
+   * remotely could also be greater than 32Kb.
+   *
+   */
+  Assertions.assertThat(stats.getReadAheadBytesRead()).describedAs(
+  "Mismatch in readAheadBytesRead counter value")
+  .isGreaterThanOrEqualTo(CUSTOM_BLOCK_BUFFER_SIZE);
+
+  Assertions.assertThat(stats.getRemoteBytesRead()).describedAs(
+  "Mismatch in remoteBytesRead counter value")
+  .isGreaterThanOrEqualTo(CUSTOM_READ_AHEAD_BUFFER_SIZE);
+
+} catch (InterruptedException e) {
+  e.printStackTrace();

Review comment:
   Can't we just throw this? If not, at least use LOG.
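   
   A minimal sketch of both alternatives (placeholder test, not the real ABFS 
one; assumes JUnit 4 and SLF4J):
   ```java
   import org.junit.Test;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class InterruptHandlingSketch {
     private static final Logger LOG =
         LoggerFactory.getLogger(InterruptHandlingSketch.class);

     // Preferred: declare InterruptedException and let JUnit report it.
     @Test
     public void pollsWithoutCatching() throws InterruptedException {
       Thread.sleep(10);  // stands in for the polling loop in the diff
     }

     // Fallback: if the catch must stay, log it and restore the interrupt
     // flag instead of calling e.printStackTrace().
     @Test
     public void pollsWithLogging() {
       try {
         Thread.sleep(10);
       } catch (InterruptedException e) {
         LOG.warn("Interrupted while waiting for readAhead counters", e);
         Thread.currentThread().interrupt();
       }
     }
   }
   ```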





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2162: HDFS-15485. Fix outdated properties of JournalNode when performing rollback

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2162:
URL: https://github.com/apache/hadoop/pull/2162#issuecomment-661766304


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 33s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  7s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 57s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   2m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   2m 54s |  hadoop-hdfs-project/hadoop-hdfs in trunk 
has 4 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 46s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   3m  0s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  94m 23s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 164m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractMultipartUploader |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.tools.TestHdfsConfigFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2162/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2162 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 020a71e51686 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d57462f2dae |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2162/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2162/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
   | javadoc | 

[GitHub] [hadoop] bshashikant commented on a change in pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-21 Thread GitBox


bshashikant commented on a change in pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156#discussion_r457978496



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -500,8 +500,13 @@
 
   public static final String DFS_NAMENODE_SNAPSHOT_MAX_LIMIT =
   "dfs.namenode.snapshot.max.limit";
-
   public static final int DFS_NAMENODE_SNAPSHOT_MAX_LIMIT_DEFAULT = 65536;
+
+  public static final String DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED =
+  "dfs.namenode.snapshot.deletion.ordered";
+  public static final boolean DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED_DEFAULT
+  = false;

Review comment:
   Addressed in https://issues.apache.org/jira/browse/HDFS-15480.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
##
@@ -353,6 +354,20 @@ public int getListLimit() {
 + " hard limit " + DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_HARD_LIMIT
 + ": (%s).", DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_KEY);
 
+this.snapshotDeletionOrdered =
+conf.getBoolean(DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED,
+DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED_DEFAULT);
+LOG.info("{} = {}", DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED,
+snapshotDeletionOrdered);
+if (snapshotDeletionOrdered && !xattrsEnabled) {

Review comment:
   https://issues.apache.org/jira/browse/HDFS-15480





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17145) Unauthenticated users are not authorized to access this page message is misleading in HttpServer2.java

2020-07-21 Thread Andras Bokor (Jira)
Andras Bokor created HADOOP-17145:
-

 Summary: Unauthenticated users are not authorized to access this 
page message is misleading in HttpServer2.java
 Key: HADOOP-17145
 URL: https://issues.apache.org/jira/browse/HADOOP-17145
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


Recently one of the users was misled by the message "Unauthenticated users are 
not authorized to access this page" when the user was not an admin user.
At that point the user is authenticated but has no admin access, so it's 
actually not an authentication issue but an authorization issue.
Also, 401 as the error code would be better.
Something like "User is unauthorized to access the page" would help users 
figure out what the problem is when accessing an HTTP endpoint.
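For illustration, a minimal sketch of the conventional HTTP mapping (the class and method here are hypothetical, not the actual HttpServer2 code): 401 signals a failed or missing authentication, while 403 signals an authenticated but unauthorized request.
{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

class AccessCheckSketch {
  static void reject(HttpServletResponse resp, boolean authenticated)
      throws IOException {
    if (!authenticated) {
      // 401: the caller has not (successfully) authenticated.
      resp.sendError(HttpServletResponse.SC_UNAUTHORIZED,
          "User is not authenticated to access this page.");
    } else {
      // 403: authenticated, but lacking admin access.
      resp.sendError(HttpServletResponse.SC_FORBIDDEN,
          "User is unauthorized to access this page.");
    }
  }
}
{code}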



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17144) Update Hadoop's lz4 to v1.9.2

2020-07-21 Thread Hemanth Boyina (Jira)
Hemanth Boyina created HADOOP-17144:
---

 Summary: Update Hadoop's lz4 to v1.9.2
 Key: HADOOP-17144
 URL: https://issues.apache.org/jira/browse/HADOOP-17144
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hemanth Boyina
Assignee: Hemanth Boyina


Update Hadoop's native lz4 to v1.9.2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17038) Support positional read in AbfsInputStream

2020-07-21 Thread Anoop Sam John (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HADOOP-17038:

Labels: HBase abfsactive  (was: abfsactive)

> Support positional read in AbfsInputStream
> --
>
> Key: HADOOP-17038
> URL: https://issues.apache.org/jira/browse/HADOOP-17038
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
>  Labels: HBase, abfsactive
>
> Right now it will do a seek to the position, read, and then seek back to the 
> old position (as per the impl in the super class).
> In HBase-style workloads we rely mostly on short preads (like 64 KB size by 
> default). So it would be ideal to support a pure positional read API which 
> will not even keep the data in a buffer but will only read the required data 
> as asked for by the caller (not reading ahead more data as per the read size 
> config).
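For illustration, a minimal, self-contained sketch of the seek-based pread described above (the class and method shapes are illustrative, not the actual AbfsInputStream API):
{code:java}
import java.io.IOException;

abstract class SeekableStreamSketch {
  abstract long getPos() throws IOException;
  abstract void seek(long pos) throws IOException;
  abstract int read(byte[] buf, int off, int len) throws IOException;

  // Default pread: seek, read, seek back. Each seek can disturb stream
  // state such as readAhead buffers, which is what a pure positional
  // read would avoid.
  int read(long position, byte[] buf, int off, int len) throws IOException {
    long oldPos = getPos();
    try {
      seek(position);
      return read(buf, off, len);
    } finally {
      seek(oldPos);
    }
  }
}
{code}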



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adamantal closed pull request #1729: HADOOP-16539. ABFS: Add missing query parameter for getPathStatus

2020-07-21 Thread GitBox


adamantal closed pull request #1729:
URL: https://github.com/apache/hadoop/pull/1729


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukul1987 commented on a change in pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-21 Thread GitBox


mukul1987 commented on a change in pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156#discussion_r457941038



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
##
@@ -353,6 +354,20 @@ public int getListLimit() {
 + " hard limit " + DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_HARD_LIMIT
 + ": (%s).", DFSConfigKeys.DFS_NAMENODE_MAX_XATTR_SIZE_KEY);
 
+this.snapshotDeletionOrdered =
+conf.getBoolean(DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED,
+DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED_DEFAULT);
+LOG.info("{} = {}", DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED,
+snapshotDeletionOrdered);
+if (snapshotDeletionOrdered && !xattrsEnabled) {

Review comment:
   This check is not needed, as all the other users of xattrs, like 
encryption and erasure coding, call
FSDirXAttrOp.unprotectedSetXAttrs(fsd, srcIIP, xattrs, flag) directly 
without doing a feature check.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -500,8 +500,13 @@
 
   public static final String DFS_NAMENODE_SNAPSHOT_MAX_LIMIT =
   "dfs.namenode.snapshot.max.limit";
-
   public static final int DFS_NAMENODE_SNAPSHOT_MAX_LIMIT_DEFAULT = 65536;
+
+  public static final String DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED =
+  "dfs.namenode.snapshot.deletion.ordered";
+  public static final boolean DFS_NAMENODE_SNAPSHOT_DELETION_ORDERED_DEFAULT
+  = false;

Review comment:
   Lets also add the value to hdfs-default.xml





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja closed pull request #2125: HADOOP-16966. ABFS: change rest version to 2019-12-12

2020-07-21 Thread GitBox


ishaniahuja closed pull request #2125:
URL: https://github.com/apache/hadoop/pull/2125


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ishaniahuja commented on pull request #2125: HADOOP-16966. ABFS: change rest version to 2019-12-12

2020-07-21 Thread GitBox


ishaniahuja commented on pull request #2125:
URL: https://github.com/apache/hadoop/pull/2125#issuecomment-661717182


   Closing this PR, as the change will be done separately.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2159: HADOOP-17124. Support LZO Codec using aircompressor

2020-07-21 Thread GitBox


sunchao commented on a change in pull request #2159:
URL: https://github.com/apache/hadoop/pull/2159#discussion_r457918119



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/LzopCodec.java
##
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io.compress;
+
+import io.airlift.compress.lzo.LzoCodec;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+public class LzopCodec extends io.airlift.compress.lzo.LzopCodec
+implements Configurable, CompressionCodec {
+@Override
+public Class getCompressorType()
+{
+return LzopCodec.HadoopLzopCompressor.class;

Review comment:
   I see that `createCompressor` returns `HadoopLzoCompressor()`. Should we 
keep these two in sync? 
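   For illustration, a minimal, self-contained sketch of the invariant in question (all names here are illustrative, not the actual codec classes): `getCompressorType()` should return exactly the class that `createCompressor()` instantiates, since callers such as compressor pools key instances by that type.

```java
interface Compressor {}

class HadoopLzopCompressor implements Compressor {}

class LzopCodecSketch {
  // Must name the same class that createCompressor() returns.
  Class<? extends Compressor> getCompressorType() {
    return HadoopLzopCompressor.class;
  }

  Compressor createCompressor() {
    return new HadoopLzopCompressor();
  }
}
```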

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/com/hadoop/compression/lzo/LzoCodec.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.hadoop.compression.lzo;

Review comment:
   hmm, why do we need this bridging class in the Hadoop repo when the class 
comes from the hadoop-lzo library?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17124) Support LZO using aircompressor

2020-07-21 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161845#comment-17161845
 ] 

Chao Sun commented on HADOOP-17124:
---

Thanks [~dbtsai]. This seems to be a good addition. I'll help review the PR. 
cc [~omalley]: I would like to hear your opinion on this as well.

> Support LZO using aircompressor
> ---
>
> Key: HADOOP-17124
> URL: https://issues.apache.org/jira/browse/HADOOP-17124
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>
> LZO codec was removed in HADOOP-4874 because the original LZO binding is GPL, 
> which is problematic. However, much legacy data is still compressed with the 
> LZO codec, and companies often put a vendor's GPL LZO codec in the classpath, 
> which might cause GPL contamination.
> Presto and ORC-77 use [aircompressor|https://github.com/airlift/aircompressor] 
> (Apache V2 licensed) to compress and decompress LZO data. Hadoop can add back 
> LZO support using aircompressor without GPL violation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-21 Thread Daryn Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161805#comment-17161805
 ] 

Daryn Sharp commented on HADOOP-12549:
--

 
{quote}In HDFS-7546 we added a hdfs-default.xml property to bring back the 
regular behaviour of trusting all principals (as was the case before 
HADOOP-9789).
{quote}
The default was never to trust all principals.  HDFS-7546 was a bad change that 
purported to support cross-realm.  We've used cross-realm trust w/o problems 
for as long as I've worked on hadoop.
{quote}I don't have full context on this but I'm pretty sure this change will 
be controversial.
{quote}
The pattern was added to augment the annotation-based principal restrictions.  
Let's say you have nn-ha1.domain and nn-ha2.domain fronted by nn.domain (IP 
failover setup).  The client expands the default annotation of hdfs/_HOST to 
hdfs/nn.domain causing it to reject the backend hdfs/nn-ha\{1,2}.domain 
principals.  The _optional_ pattern allows whitelisting those backend 
principals.
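
For illustration, a minimal, self-contained sketch of that check (the names and the plain-regex matching are illustrative, not the actual SaslRpcClient code):
{code:java}
import java.util.regex.Pattern;

class PrincipalCheckSketch {
  // Trust the server principal if it equals the one expanded from the
  // annotation, or if it matches the optional whitelisting pattern.
  static boolean isTrusted(String server, String expected, String pattern) {
    return server.equals(expected)
        || (pattern != null && Pattern.matches(pattern, server));
  }

  public static void main(String[] args) {
    String expected = "hdfs/nn.domain@REALM";      // hdfs/_HOST expanded
    String backend = "hdfs/nn-ha1.domain@REALM";   // actual backend NN
    System.out.println(isTrusted(backend, expected, null));  // false
    System.out.println(
        isTrusted(backend, expected, "hdfs/nn-ha[12]\\.domain@REALM")); // true
  }
}
{code}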

Non-negotiable -1.  A wildcard default is an incompatible regression that 
breaks, by shorting out, annotation-based principal restrictions.  Clients will 
authenticate to any service principal.  The motivation appears to be using an 
empty configuration.  The solution is to add a resource that contains security 
settings.

 

 

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all forms of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Deegue opened a new pull request #2162: HDFS-15485. Fix outdated properties of JournalNode when performing rollback

2020-07-21 Thread GitBox


Deegue opened a new pull request #2162:
URL: https://github.com/apache/hadoop/pull/2162


   When rolling back an HDFS cluster, properties in JNStorage won't be 
refreshed after the storage dir changes, which leads to exceptions when 
starting the NameNode.
   
   We add a refresh method that refreshes the JournalNode's properties after 
rollback has changed the previous dir to current.
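
   For illustration, a minimal, self-contained sketch of the idea (all names are illustrative, not the actual JNStorage API): after rollback renames the directories, re-read current/VERSION so cached properties such as namespaceID match what is now on disk.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

class JournalStorageSketch {
  private final File storageDir;
  private Properties version = new Properties();

  JournalStorageSketch(File storageDir) {
    this.storageDir = storageDir;
  }

  // Re-load current/VERSION after a rollback has swapped directories.
  void refreshAfterRollback() throws IOException {
    File versionFile = new File(new File(storageDir, "current"), "VERSION");
    try (InputStream in = new FileInputStream(versionFile)) {
      Properties fresh = new Properties();
      fresh.load(in);
      version = fresh; // drop stale values such as namespaceID = 0
    }
  }

  String namespaceId() {
    return version.getProperty("namespaceID", "0");
  }
}
```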



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17142) Fix outdated properties of journal node when perform rollback

2020-07-21 Thread Deegue (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deegue resolved HADOOP-17142.
-
Resolution: Abandoned

> Fix outdated properties of journal node when perform rollback
> -
>
> Key: HADOOP-17142
> URL: https://issues.apache.org/jira/browse/HADOOP-17142
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Deegue
>Priority: Minor
>
> When rolling back an HDFS cluster, properties in JNStorage won't be refreshed 
> after the storage dir changes. It leads to exceptions when starting the 
> NameNode.
> The exception looks like:
> {code:java}
> 2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: 
> recoverUnfinalizedSegments failed for required journal 
> (JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 
> 10.0.118.179:8485], stream=null))
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many 
> exceptions to achieve quorum size 2/3. 3 exceptions thrown:
> 10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory 
> /mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but 
> storage has nsId 0
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
>   at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
>   at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17143) Fix outdated properties of JournalNode when performing rollback

2020-07-21 Thread Deegue (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deegue resolved HADOOP-17143.
-
Resolution: Duplicate

> Fix outdated properties of JournalNode when performing rollback
> ---
>
> Key: HADOOP-17143
> URL: https://issues.apache.org/jira/browse/HADOOP-17143
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Deegue
>Priority: Minor
>
> When rolling back an HDFS cluster, properties in JNStorage won't be refreshed 
> after the storage dir changes. It leads to exceptions when starting the 
> NameNode.
> The exception looks like:
> {code:java}
> 2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: 
> recoverUnfinalizedSegments failed for required journal 
> (JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 
> 10.0.118.179:8485], stream=null))
> org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many 
> exceptions to achieve quorum size 2/3. 3 exceptions thrown:
> 10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory 
> /mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but 
> storage has nsId 0
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
>   at 
> org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
>   at 
> org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17143) Fix outdated properties of JournalNode when performing rollback

2020-07-21 Thread Deegue (Jira)
Deegue created HADOOP-17143:
---

 Summary: Fix outdated properties of JournalNode when performing 
rollback
 Key: HADOOP-17143
 URL: https://issues.apache.org/jira/browse/HADOOP-17143
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Deegue


When rolling back an HDFS cluster, properties in JNStorage won't be refreshed 
after the storage dir changes. It leads to exceptions when starting the 
NameNode.

The exception looks like:
{code:java}
2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] 
org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: 
recoverUnfinalizedSegments failed for required journal 
(JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 
10.0.118.179:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions 
to achieve quorum size 2/3. 3 exceptions thrown:
10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory 
/mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but 
storage has nsId 0
at 
org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Deegue commented on pull request #2161: HADOOP-17142. Fix outdated properties of journal node when perform rollback

2020-07-21 Thread GitBox


Deegue commented on pull request #2161:
URL: https://github.com/apache/hadoop/pull/2161#issuecomment-661676839


   Sorry, moving it to HDFS and closing this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Deegue closed pull request #2161: HADOOP-17142. Fix outdated properties of journal node when perform rollback

2020-07-21 Thread GitBox


Deegue closed pull request #2161:
URL: https://github.com/apache/hadoop/pull/2161


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Deegue opened a new pull request #2161: HADOOP-17142. Fix outdated properties of journal node when perform rollback

2020-07-21 Thread GitBox


Deegue opened a new pull request #2161:
URL: https://github.com/apache/hadoop/pull/2161


   When rolling back an HDFS cluster, properties in JNStorage won't be 
refreshed after the storage dir changes, which leads to exceptions when 
starting the NameNode.
   
   We add a refresh method that refreshes the JournalNode's properties after 
rollback has changed the previous dir to current.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-21 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161765#comment-17161765
 ] 

Chao Sun commented on HADOOP-12549:
---

Thanks [~elgoiri] and [~hexiaoqiao]! To give a bit of context, this config 
{{dfs.namenode.kerberos.principal.pattern}} is already set to {{*}} by default 
for HDFS in {{hdfs-default.xml}} by HDFS-7546 (not HDFS-7456 as in the title). 
However, in some cases the default value is not honored:
 # some applications may only depend on hadoop-hdfs-client but not hadoop-hdfs, 
which will not use the {{hdfs-default.xml}}.
 # applications may choose to initialize a {{Configuration}} via {{new 
Configuration(false)}}, which will skip the default settings altogether (see 
the sketch below).
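
For illustration, a minimal sketch of case 2 (hedged: the output depends on which default resources are registered on the classpath):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfDefaultsSketch {
  public static void main(String[] args) {
    // Loads *-default.xml resources registered on the classpath.
    Configuration withDefaults = new Configuration();
    // Skips all default resources entirely.
    Configuration noDefaults = new Configuration(false);

    String key = "dfs.namenode.kerberos.principal.pattern";
    // "*" only if hdfs-default.xml is present and registered; null otherwise.
    System.out.println(withDefaults.get(key));
    // Always null: the HDFS-7546 default is never applied.
    System.out.println(noDefaults.get(key));
  }
}
{code}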

We recently hit this issue when upgrading our routers from non-secure to 
secure. In our environment we use different Kerberos primaries for router and 
hdfs, e.g., routers use principal {{router/@}} while namenodes 
use {{hdfs/@}}. When clients try to talk to both, they will 
fail with something like:
{code:java}
Failed on local exception: java.io.IOException: Couldn't set up IO streams: 
java.lang.IllegalArgumentException: Server has invalid Kerberos principal: 
router/@, expecting: hdfs/@;
{code}
It took quite some effort for us to find out all the clients that are exposed 
to this and fix their configurations. In retrospect, this patch would have made 
things much easier.

With that said, I don't pretend to be a security expert and would like to hear 
opinions from other folks above. cc [~kihwal] also who reviewed the original 
patch of this feature.

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all forms of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17142) Fix outdated properties of journal node when perform rollback

2020-07-21 Thread Deegue (Jira)
Deegue created HADOOP-17142:
---

 Summary: Fix outdated properties of journal node when perform 
rollback
 Key: HADOOP-17142
 URL: https://issues.apache.org/jira/browse/HADOOP-17142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Deegue


When rolling back an HDFS cluster, properties in JNStorage won't be refreshed 
after the storage dir changes. It leads to exceptions when starting the 
NameNode.

The exception looks like:
{code:java}
2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] 
org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: 
recoverUnfinalizedSegments failed for required journal 
(JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 
10.0.118.179:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions 
to achieve quorum size 2/3. 3 exceptions thrown:
10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory 
/mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but 
storage has nsId 0
at 
org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12549) Extend HDFS-7456 default generically to all pattern lookups

2020-07-21 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17161749#comment-17161749
 ] 

Xiaoqiao He commented on HADOOP-12549:
--

Thanks [~csun] for involving me here. I am also concerned that there may be an 
issue of auth amplification. cc: [~eyang], [~daryn], any suggestions?

> Extend HDFS-7456 default generically to all pattern lookups
> ---
>
> Key: HADOOP-12549
> URL: https://issues.apache.org/jira/browse/HADOOP-12549
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, security
>Affects Versions: 2.7.1
>Reporter: Harsh J
>Priority: Minor
> Attachments: HADOOP-12549.002.patch, HADOOP-12549.patch
>
>
> In HDFS-7546 we added a hdfs-default.xml property to bring back the regular 
> behaviour of trusting all principals (as was the case before HADOOP-9789). 
> However, the change only targeted HDFS users and also only those that used 
> the default-loading mechanism of Configuration class (i.e. not {{new 
> Configuration(false)}} users).
> I'd like to propose adding the same default to the generic RPC client code 
> also, so the default affects all forms of clients equally.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2154: HADOOP-17113. Adding ReadAhead Counters in ABFS

2020-07-21 Thread GitBox


hadoop-yetus commented on pull request #2154:
URL: https://github.com/apache/hadoop/pull/2154#issuecomment-661657516


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  1s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 55s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 52s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 25s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 48s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  60m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2154 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2211ea753785 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3833c616e08 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/2/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2154/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please 

[GitHub] [hadoop] bshashikant commented on pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-21 Thread GitBox


bshashikant commented on pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156#issuecomment-661654364


   Thanks @szetszwo for working on this. I have committed this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant merged pull request #2156: HDFS-15479. Ordered snapshot deletion: make it a configurable feature

2020-07-21 Thread GitBox


bshashikant merged pull request #2156:
URL: https://github.com/apache/hadoop/pull/2156


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org