[GitHub] [hadoop] hadoop-yetus commented on pull request #3915: HDFS-16434. Add opname to read/write lock for remaining operations

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #3915:
URL: https://github.com/apache/hadoop/pull/3915#issuecomment-1078703073


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3915/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 400 unchanged 
- 0 fixed = 402 total (was 400)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 229m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 329m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3915/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3915 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e37feabffbe9 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f3c5e66a6058dbcc51a9559dc05e0bce16f8121c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3915/5/testReport/ |
   | Max. process+thread count | 2908 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3915/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Updated] (HADOOP-18173) AWS S3 copyFromLocalOperation doesn't support single file

2022-03-24 Thread qian (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qian updated HADOOP-18173:
--
Description: 
A Spark job uses AWS S3 as its FileSystem and calls
{code:java}
fs.copyFromLocalFile(delSrc, overwrite, src, dest) 

delSrc = false
overwrite = true
src = 
"/Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar"
dest = 
"s3a://spark/spark-upload-a703d8e7-8dd2-4e29-beca-b4df2fedefbd/spark-examples_2.12-3.4.0-SNAPSHOT.jar"{code}
It then throws a PathIOException; the message is as follows:
{code:java}
Exception in thread "main" org.apache.spark.SparkException: Uploading file 
/Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar
 failed...
at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:332)

at 
org.apache.spark.deploy.k8s.KubernetesUtils$.$anonfun$uploadAndTransformFileUris$1(KubernetesUtils.scala:277)

at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)   
 
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
   
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)

at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadAndTransformFileUris(KubernetesUtils.scala:275)

at 
org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.$anonfun$getAdditionalPodSystemProperties$1(BasicDriverFeatureStep.scala:187)
   
at scala.collection.immutable.List.foreach(List.scala:431)
at 
org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.getAdditionalPodSystemProperties(BasicDriverFeatureStep.scala:178)
at 
org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.$anonfun$buildFromFeatures$5(KubernetesDriverBuilder.scala:86)
at 
scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)  
  
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)  
  
at scala.collection.immutable.List.foldLeft(List.scala:91)
at 
org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.buildFromFeatures(KubernetesDriverBuilder.scala:84)

at 
org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:104)

at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5(KubernetesClientApplication.scala:248)

at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5$adapted(KubernetesClientApplication.scala:242)
 
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2738)
at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:242)

at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:214)

at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)

at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)   
 
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046) 
   
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)Caused by: 
org.apache.spark.SparkException: Error uploading file 
spark-examples_2.12-3.4.0-SNAPSHOT.jar
at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileToHadoopCompatibleFS(KubernetesUtils.scala:355)

at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:328)

... 30 more
Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path for 
URI:file:///Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar':
 Input/output error 
at 
org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:365)

at 
org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:226)

at 
org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:170)

at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$copyFromLocalFile$25(S3AFileSystem.java:3920)

at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
at 
org.apache.hadoop.fs.statistics.impl.IOStatistics

[GitHub] [hadoop] jojochuang commented on pull request #4104: HDFS-16520. Improve EC pread: avoid potential reading whole block

2022-03-24 Thread GitBox


jojochuang commented on pull request #4104:
URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1078594592


   This looks like a great improvement. @sodonnel, @umamaheswararao, @ferhui, 
@tasanuma: you may be interested.





[GitHub] [hadoop] tomscut commented on a change in pull request #4082: HDFS-16507. [SBN read] Avoid purging edit log which is in progress

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #4082:
URL: https://github.com/apache/hadoop/pull/4082#discussion_r834896270



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
##
@@ -1509,13 +1509,18 @@ synchronized void abortCurrentLogSegment() {
* effect.
*/
   @Override
-  public synchronized void purgeLogsOlderThan(final long minTxIdToKeep) {
+  public synchronized void purgeLogsOlderThan(long minTxIdToKeep) {
 // Should not purge logs unless they are open for write.
 // This prevents the SBN from purging logs on shared storage, for example.
 if (!isOpenForWrite()) {
   return;
 }
-
+
+// Reset purgeLogsFrom to avoid purging edit log which is in progress.
+if (isSegmentOpen()) {
+  minTxIdToKeep = minTxIdToKeep > curSegmentTxId ? curSegmentTxId : 
minTxIdToKeep;

Review comment:
   Hi @jojochuang @Hexiaoqiao @ayushtkn, please also take a look. Thank 
you very much.
   
   This problem began with in-progress edit log tailing, and 
[HDFS-14317](https://issues.apache.org/jira/browse/HDFS-14317) does a good job 
of avoiding it.
   
   However, if the SNN's rollEdits operation is accidentally disabled by 
configuration and the ANN's automatic roll period is very long, the 
in-progress edit log may still be purged.
   
   Although we add assertions, assertions are generally disabled in 
production (we don't normally add `-ea` to the JVM parameters). This problem 
and the logs also prove that we do not strictly ensure `(minTxIdToKeep <= 
curSegmentTxId)`, so it is dangerous for the NameNode: it may crash the active 
NameNode with "No log file to finalize at transaction ID xxx".
   ```
   2022-03-15 17:28:52,867 FATAL namenode.FSEditLog 
(JournalSet.java:mapJournalsAndReportErrors(393)) - Error: finalize log segment 
24207987, 27990692 failed for required journal (JournalAndStream(mgr=QJM
to [xxx:8485, xxx:8485, xxx:8485, xxx:8485, xxx:8485], stream=null))
   org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many 
exceptions to achieve quorum size 3/5. 5 exceptions thrown:
   10.152.124.157:8485: No log file to finalize at transaction ID 24207987 ; 
journal id: ambari-test
   at 
org.apache.hadoop.hdfs.qjournal.server.Journal.finalizeLogSegment(Journal.java:656)
   at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.finalizeLogSegment(JournalNodeRpcServer.java:210)
   at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.finalizeLogSegment(QJournalProtocolServerSideTranslatorPB.java:205)
   at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:28890)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:550)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1094)
   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1066)
   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:422)
   at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2989) 
   ```
   
   We should reset `minTxIdToKeep` to strictly ensure that the in-progress 
edit log is not purged, and wait for the ANN to roll automatically and 
finalize the edit log. Then, after a checkpoint, the ANN automatically purges 
the finalized edit log (see the stack mentioned above).
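
   Equivalently (a sketch, not part of the patch), the clamp in the diff 
above can be written with `Math.min`:
   ```java
   // Never purge past the start of the currently open segment.
   if (isSegmentOpen()) {
     minTxIdToKeep = Math.min(minTxIdToKeep, curSegmentTxId);
   }
   ```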







[GitHub] [hadoop] tomscut commented on a change in pull request #4082: HDFS-16507. [SBN read] Avoid purging edit log which is in progress

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #4082:
URL: https://github.com/apache/hadoop/pull/4082#discussion_r834896270



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
##
@@ -1509,13 +1509,18 @@ synchronized void abortCurrentLogSegment() {
* effect.
*/
   @Override
-  public synchronized void purgeLogsOlderThan(final long minTxIdToKeep) {
+  public synchronized void purgeLogsOlderThan(long minTxIdToKeep) {
 // Should not purge logs unless they are open for write.
 // This prevents the SBN from purging logs on shared storage, for example.
 if (!isOpenForWrite()) {
   return;
 }
-
+
+// Reset purgeLogsFrom to avoid purging edit log which is in progress.
+if (isSegmentOpen()) {
+  minTxIdToKeep = minTxIdToKeep > curSegmentTxId ? curSegmentTxId : 
minTxIdToKeep;

Review comment:
   Hi @jojochuang @Hexiaoqiao @ayushtkn, please also take a look. Thank 
you very much.
   
   This problem began with in-progress edit log tailing, and 
[HDFS-14317](https://issues.apache.org/jira/browse/HDFS-14317) does a good job 
of avoiding it.
   
   However, if the SNN's rollEdits operation is accidentally disabled by 
configuration and the ANN's automatic roll period is very long, the 
in-progress edit log may still be purged.
   
   Although we add assertions, assertions are generally disabled in 
production (we don't normally add `-ea` to the JVM parameters). This problem 
and the logs also prove that we do not strictly ensure `(minTxIdToKeep <= 
curSegmentTxId)`, so it is dangerous for the NameNode.
   
   We should reset `minTxIdToKeep` to strictly ensure that the in-progress 
edit log is not purged, and wait for the ANN to roll automatically and 
finalize the edit log. Then, after a checkpoint, the ANN automatically purges 
the finalized edit log (see the stack mentioned above).








[jira] [Commented] (HADOOP-18173) AWS S3 copyFromLocalOperation doesn't support single file

2022-03-24 Thread qian (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17512156#comment-17512156
 ] 

qian commented on HADOOP-18173:
---

Hi [~bogthe], could you help me with this case?

> AWS S3 copyFromLocalOperation doesn't support single file
> -
>
> Key: HADOOP-18173
> URL: https://issues.apache.org/jira/browse/HADOOP-18173
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.2
> Environment: Hadoop version 3.3.2
> Spark version 3.4.0-SNAPSHOT
> use minio:latest to mock S3 filesystem
>  
>Reporter: qian
>Priority: Major
>
> A Spark job uses AWS S3 as its FileSystem and calls
> {code:java}
> fs.copyFromLocalFile(delSrc, overwrite, src, dest) 
> delSrc = false
> overwrite = true
> src = 
> "/Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar"
> dest = 
> "s3a://spark/spark-upload-a703d8e7-8dd2-4e29-beca-b4df2fedefbd/spark-examples_2.12-3.4.0-SNAPSHOT.jar"{code}
> It then throws a PathIOException; the message is as follows:
> {code:java}
> Exception in thread "main" org.apache.spark.SparkException: Uploading file 
> /Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar
>  failed...
> at 
> org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:332)
> 
> at 
> org.apache.spark.deploy.k8s.KubernetesUtils$.$anonfun$uploadAndTransformFileUris$1(KubernetesUtils.scala:277)
> 
> at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286) 
>
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)   
>  
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)  
>   
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at scala.collection.TraversableLike.map(TraversableLike.scala:286)
> at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
> at scala.collection.AbstractTraversable.map(Traversable.scala:108)
> at 
> org.apache.spark.deploy.k8s.KubernetesUtils$.uploadAndTransformFileUris(KubernetesUtils.scala:275)
> 
> at 
> org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.$anonfun$getAdditionalPodSystemProperties$1(BasicDriverFeatureStep.scala:187)
>
> at scala.collection.immutable.List.foreach(List.scala:431)
> at 
> org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.getAdditionalPodSystemProperties(BasicDriverFeatureStep.scala:178)
> at 
> org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.$anonfun$buildFromFeatures$5(KubernetesDriverBuilder.scala:86)
> at 
> scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
> 
> at 
> scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)   
>  
> at scala.collection.immutable.List.foldLeft(List.scala:91)
> at 
> org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.buildFromFeatures(KubernetesDriverBuilder.scala:84)
> 
> at 
> org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:104)
> 
> at 
> org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5(KubernetesClientApplication.scala:248)
> 
> at 
> org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5$adapted(KubernetesClientApplication.scala:242)
> at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2738) 
>
> at 
> org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:242)
> 
> at 
> org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:214)
> 
> at 
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
> 
> at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180) 
>
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
> at 
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)  
>   
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)Caused by: 
> org.apache.spark.SparkException: Error uploading file 
> spark-examples_2.12-3.4.0-SNAPSHOT.jar
> at 
> org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileToHadoopCompatibleFS(KubernetesUtils.scala:355)
> 
> at 
> org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:328)
> 
> ... 30 more
> Caused by: org.apache.hadoop.fs.PathIOException:

[jira] [Updated] (HADOOP-18173) AWS S3 copyFromLocalOperation doesn't support single file

2022-03-24 Thread qian (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

qian updated HADOOP-18173:
--
Description: 
A Spark job uses AWS S3 as its FileSystem and calls
{code:java}
fs.copyFromLocalFile(delSrc, overwrite, src, dest) 

delSrc = false
overwrite = true
src = 
"/Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar"
dest = 
"s3a://spark/spark-upload-a703d8e7-8dd2-4e29-beca-b4df2fedefbd/spark-examples_2.12-3.4.0-SNAPSHOT.jar"{code}
It then throws a PathIOException; the message is as follows:
{code:java}
Exception in thread "main" org.apache.spark.SparkException: Uploading file 
/Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar
 failed...
at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:332)

at 
org.apache.spark.deploy.k8s.KubernetesUtils$.$anonfun$uploadAndTransformFileUris$1(KubernetesUtils.scala:277)

at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)   
 
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
   
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)

at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadAndTransformFileUris(KubernetesUtils.scala:275)

at 
org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.$anonfun$getAdditionalPodSystemProperties$1(BasicDriverFeatureStep.scala:187)
   
at scala.collection.immutable.List.foreach(List.scala:431)
at 
org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.getAdditionalPodSystemProperties(BasicDriverFeatureStep.scala:178)
at 
org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.$anonfun$buildFromFeatures$5(KubernetesDriverBuilder.scala:86)
at 
scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)  
  
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)  
  
at scala.collection.immutable.List.foldLeft(List.scala:91)
at 
org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.buildFromFeatures(KubernetesDriverBuilder.scala:84)

at 
org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:104)

at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5(KubernetesClientApplication.scala:248)

at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5$adapted(KubernetesClientApplication.scala:242)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2738)   
 
at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:242)

at 
org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:214)

at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)

at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)   
 
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046) 
   
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)Caused by: 
org.apache.spark.SparkException: Error uploading file 
spark-examples_2.12-3.4.0-SNAPSHOT.jar
at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileToHadoopCompatibleFS(KubernetesUtils.scala:355)

at 
org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:328)

... 30 more
Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path for 
URI:file:///Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar':
 Input/output error
at 
org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:365)

at 
org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:226)

at 
org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:170)

at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$copyFromLocalFile$25(S3AFileSystem.java:3920)

at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
at 
org.apache.hadoop.fs.sta

[jira] [Created] (HADOOP-18173) AWS S3 copyFromLocalOperation doesn't support single file

2022-03-24 Thread qian (Jira)
qian created HADOOP-18173:
-

 Summary: AWS S3 copyFromLocalOperation doesn't support single file
 Key: HADOOP-18173
 URL: https://issues.apache.org/jira/browse/HADOOP-18173
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.3.2
 Environment: Hadoop version 3.3.2

Spark version 3.4.0-SNAPSHOT

use minio:latest to mock S3 filesystem

 
Reporter: qian


A Spark job uses AWS S3 as its FileSystem and calls
{code:java}
fs.copyFromLocalFile(delSrc, overwrite, src, dest) 

delSrc = false
overwrite = true
src = 
"/Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar"
dest = 
"s3a://spark/spark-upload-a703d8e7-8dd2-4e29-beca-b4df2fedefbd/spark-examples_2.12-3.4.0-SNAPSHOT.jar"{code}
It then throws a PathIOException; the message is as follows:
{code:java}
Exception in thread "main" org.apache.spark.SparkException: Uploading file /Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar failed...
at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:332)
at org.apache.spark.deploy.k8s.KubernetesUtils$.$anonfun$uploadAndTransformFileUris$1(KubernetesUtils.scala:277)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadAndTransformFileUris(KubernetesUtils.scala:275)
at org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.$anonfun$getAdditionalPodSystemProperties$1(BasicDriverFeatureStep.scala:187)
at scala.collection.immutable.List.foreach(List.scala:431)
at org.apache.spark.deploy.k8s.features.BasicDriverFeatureStep.getAdditionalPodSystemProperties(BasicDriverFeatureStep.scala:178)
at org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.$anonfun$buildFromFeatures$5(KubernetesDriverBuilder.scala:86)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at org.apache.spark.deploy.k8s.submit.KubernetesDriverBuilder.buildFromFeatures(KubernetesDriverBuilder.scala:84)
at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:104)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5(KubernetesClientApplication.scala:248)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$5$adapted(KubernetesClientApplication.scala:242)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2738)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:242)
at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:214)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Error uploading file spark-examples_2.12-3.4.0-SNAPSHOT.jar
at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileToHadoopCompatibleFS(KubernetesUtils.scala:355)
at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:328)
... 30 more
Caused by: org.apache.hadoop.fs.PathIOException: `Cannot get relative path for URI:file:///Users/hengzhen.sq/IdeaProjects/spark/dist/examples/jars/spark-examples_2.12-3.4.0-SNAPSHOT.jar': Input/output error
at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.getFinalPath(CopyFromLocalOperation.java:365)
at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.uploadSourceFromFS(CopyFromLocalOperation.java:226)
at org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation.execute(CopyFromLocalOperation.java:170)
at org.apa

[GitHub] [hadoop] tomscut commented on a change in pull request #4082: HDFS-16507. [SBN read] Avoid purging edit log which is in progress

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #4082:
URL: https://github.com/apache/hadoop/pull/4082#discussion_r834896270



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
##
@@ -1509,13 +1509,18 @@ synchronized void abortCurrentLogSegment() {
* effect.
*/
   @Override
-  public synchronized void purgeLogsOlderThan(final long minTxIdToKeep) {
+  public synchronized void purgeLogsOlderThan(long minTxIdToKeep) {
 // Should not purge logs unless they are open for write.
 // This prevents the SBN from purging logs on shared storage, for example.
 if (!isOpenForWrite()) {
   return;
 }
-
+
+// Reset purgeLogsFrom to avoid purging edit log which is in progress.
+if (isSegmentOpen()) {
+  minTxIdToKeep = minTxIdToKeep > curSegmentTxId ? curSegmentTxId : 
minTxIdToKeep;

Review comment:
   Hi @jojochuang @Hexiaoqiao @ayushtkn, please also take a look. Thank 
you very much.
   
   This problem began with in-progress edit log tailing, and 
[HDFS-14317](https://issues.apache.org/jira/browse/HDFS-14317) does a good job 
of avoiding it.
   
   However, if the SNN's rollEdits operation is accidentally disabled by 
configuration and the ANN's automatic roll period is very long, the 
in-progress edit log may still be purged.
   
   Although we add assertions, assertions are generally disabled in 
production (we don't normally add `-ea` to the JVM parameters). This bug also 
proves that we do not strictly ensure `(minTxIdToKeep <= curSegmentTxId)`, so 
it is dangerous for the NameNode.
   
   We should reset `minTxIdToKeep` to strictly ensure that the in-progress 
edit log is not purged, and wait for the ANN to roll automatically and 
finalize the edit log. Then, after a checkpoint, the ANN automatically purges 
the finalized edit log (see the stack mentioned above).







[GitHub] [hadoop] tomscut commented on a change in pull request #4082: HDFS-16507. [SBN read] Avoid purging edit log which is in progress

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #4082:
URL: https://github.com/apache/hadoop/pull/4082#discussion_r834896270



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
##
@@ -1509,13 +1509,18 @@ synchronized void abortCurrentLogSegment() {
* effect.
*/
   @Override
-  public synchronized void purgeLogsOlderThan(final long minTxIdToKeep) {
+  public synchronized void purgeLogsOlderThan(long minTxIdToKeep) {
 // Should not purge logs unless they are open for write.
 // This prevents the SBN from purging logs on shared storage, for example.
 if (!isOpenForWrite()) {
   return;
 }
-
+
+// Reset purgeLogsFrom to avoid purging edit log which is in progress.
+if (isSegmentOpen()) {
+  minTxIdToKeep = minTxIdToKeep > curSegmentTxId ? curSegmentTxId : 
minTxIdToKeep;

Review comment:
   Hi @jojochuang @Hexiaoqiao @ayushtkn, please also take a look. Thank 
you very much.
   
   This problem began with in-progress edit log tailing, and 
[HDFS-14317](https://issues.apache.org/jira/browse/HDFS-14317) does a good job 
of avoiding it.
   
   However, if the SNN's rollEdits operation is accidentally disabled by 
configuration and the ANN's automatic roll period is very long, the 
in-progress edit log may still be purged.
   
   Although we add assertions, assertions are generally disabled in 
production. This bug also proves that we do not strictly ensure 
`(minTxIdToKeep <= curSegmentTxId)`, so it is dangerous for the NameNode.
   
   We should reset `minTxIdToKeep` to strictly ensure that the in-progress 
edit log is not purged, and wait for the ANN to roll automatically and 
finalize the edit log. Then, after a checkpoint, the ANN automatically purges 
the finalized edit log (see the stack mentioned above).







[GitHub] [hadoop] tomscut commented on a change in pull request #4082: HDFS-16507. [SBN read] Avoid purging edit log which is in progress

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #4082:
URL: https://github.com/apache/hadoop/pull/4082#discussion_r834896270



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
##
@@ -1509,13 +1509,18 @@ synchronized void abortCurrentLogSegment() {
* effect.
*/
   @Override
-  public synchronized void purgeLogsOlderThan(final long minTxIdToKeep) {
+  public synchronized void purgeLogsOlderThan(long minTxIdToKeep) {
 // Should not purge logs unless they are open for write.
 // This prevents the SBN from purging logs on shared storage, for example.
 if (!isOpenForWrite()) {
   return;
 }
-
+
+// Reset purgeLogsFrom to avoid purging edit log which is in progress.
+if (isSegmentOpen()) {
+  minTxIdToKeep = minTxIdToKeep > curSegmentTxId ? curSegmentTxId : 
minTxIdToKeep;

Review comment:
   This problem began with in-progress edit log tailing, and 
[HDFS-14317](https://issues.apache.org/jira/browse/HDFS-14317) does a good job 
of avoiding it.
   
   However, if the SNN's rollEdits operation is accidentally disabled by 
configuration and the ANN's automatic roll period is very long, the 
in-progress edit log may still be purged.
   
   Although we add assertions, assertions are generally disabled in 
production. This bug also proves that we do not strictly ensure 
`(minTxIdToKeep <= curSegmentTxId)`, so it is dangerous for the NameNode.
   
   We should reset `minTxIdToKeep` to strictly ensure that the in-progress 
edit log is not purged, and wait for the ANN to roll automatically and 
finalize the edit log. Then, after a checkpoint, the ANN automatically purges 
the finalized edit log (see the stack mentioned above).







[GitHub] [hadoop] hadoop-yetus commented on pull request #4106: Changed scope for isRootInternalDir/getRootFallbackLink for InodeTree

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4106:
URL: https://github.com/apache/hadoop/pull/4106#issuecomment-1078573267


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  23m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 46s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 209m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4106/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4106 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 185873fa492a 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fb630659cd20efa3595e55a7f36919abdb14413d |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4106/1/testReport/ |
   | Max. process+thread count | 2752 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4106/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] tomscut commented on pull request #4001: HDFS-16460. [SPS]: Handle failure retries for moving tasks

2022-03-24 Thread GitBox


tomscut commented on pull request #4001:
URL: https://github.com/apache/hadoop/pull/4001#issuecomment-1078572001


   Hi @umamaheswararao @tasanuma @Hexiaoqiao , could you please take a look? 
Thank you very much.





[GitHub] [hadoop] tomscut edited a comment on pull request #4001: HDFS-16460. [SPS]: Handle failure retries for moving tasks

2022-03-24 Thread GitBox


tomscut edited a comment on pull request #4001:
URL: https://github.com/apache/hadoop/pull/4001#issuecomment-1078572001


   Hi @umamaheswararao @tasanuma @Hexiaoqiao @ferhui , could you please take a 
look? Thank you very much.





[GitHub] [hadoop] tomscut removed a comment on pull request #4001: HDFS-16460. [SPS]: Handle failure retries for moving tasks

2022-03-24 Thread GitBox


tomscut removed a comment on pull request #4001:
URL: https://github.com/apache/hadoop/pull/4001#issuecomment-1048427499


   Hi @jojochuang @tasanuma @Hexiaoqiao @ferhui , PTAL. Thanks.





[GitHub] [hadoop] tomscut commented on a change in pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #4057:
URL: https://github.com/apache/hadoop/pull/4057#discussion_r834874623



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -2751,6 +2751,13 @@ public boolean checkBlockReportLease(BlockReportContext 
context,
   return true;
 }
 DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
+if (node == null) {
+  final UnregisteredNodeException e = new 
UnregisteredNodeException(nodeID, null);
+  NameNode.stateChangeLog.error("BLOCK* NameSystem.getDatanode: " + "Data 
node " + nodeID +

Review comment:
   > I am confused whether we need a log here, since the upper method also 
logs when it meets UnregisteredNodeException. Although it is DEBUG level now, 
we could improve that to WARN, right?
   
   Thanks @Hexiaoqiao for your review. I will wait for @ayushtkn's reply and 
then decide whether to update again.
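
   For context, a sketch of the guard under discussion (the diff above is 
truncated; whether and where to log before throwing is exactly the open 
question):
   ```java
   DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
   if (node == null) {
     // Fail fast for an unregistered DataNode instead of hitting an NPE below.
     throw new UnregisteredNodeException(nodeID, null);
   }
   ```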







[GitHub] [hadoop] tomscut commented on pull request #3915: HDFS-16434. Add opname to read/write lock for remaining operations

2022-03-24 Thread GitBox


tomscut commented on pull request #3915:
URL: https://github.com/apache/hadoop/pull/3915#issuecomment-1078553976


   > #4064 added a new `namesystem.writeUnlock()`. Could you add the opname 
too? The others look good to me.
   
   Thank you very much.





[GitHub] [hadoop] tomscut commented on a change in pull request #3915: HDFS-16434. Add opname to read/write lock for remaining operations

2022-03-24 Thread GitBox


tomscut commented on a change in pull request #3915:
URL: https://github.com/apache/hadoop/pull/3915#discussion_r834869705



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java
##
@@ -158,7 +158,7 @@ public void run() {
   LOG.warn("DatanodeAdminMonitor caught exception when processing node.",
   e);
 } finally {
-  namesystem.writeUnlock();
+  namesystem.writeUnlock("datanodeAdminMonitorThread");

Review comment:
   Thanks @tasanuma for your review. I fixed it.
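
   For readers following along: this series simply threads an operation name 
through the lock release so the lock-hold logging can name the caller. A rough 
sketch of the pattern (assuming the `writeUnlock(String)` overload added 
earlier in the HDFS-16434 work):
   ```java
   namesystem.writeLock();
   try {
     // ... mutate namesystem state under the write lock ...
   } finally {
     // The opname appears in the lock's held-too-long log messages.
     namesystem.writeUnlock("datanodeAdminMonitorThread");
   }
   ```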







[jira] [Work logged] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?focusedWorklogId=747489&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747489
 ]

ASF GitHub Bot logged work on HADOOP-18167:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 22:14
Start Date: 24/Mar/22 22:14
Worklog Time Spent: 10m 
  Work Description: simbadzina commented on a change in pull request #4092:
URL: https://github.com/apache/hadoop/pull/4092#discussion_r834774762



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -825,4 +860,55 @@ protected void syncTokenOwnerStats() {
   addTokenForOwnerStats(id);
 }
   }
+
+  /**
+   * DelegationTokenSecretManagerMetrics tracks token management operations
+   * and publishes them through the metrics interfaces.
+   */
+  @Metrics(about="Delegation token secret manager metrics", context="token")
+  static class DelegationTokenSecretManagerMetrics implements 
IOStatisticsSource {
+final static String STORE_TOKEN_STAT = "storeToken";
+final static String UPDATE_TOKEN_STAT = "updateToken";
+final static String REMOVE_TOKEN_STAT = "removeToken";
+
+final MetricsRegistry registry = new 
MetricsRegistry("DelegationTokenSecretManagerMetrics");

Review comment:
   I see. A minor change would be to add `LOG.debug("Initialized {}", 
registry);` like in ClientSCMMetrics, just so the IDE and, potentially, our 
style-check steps in testing don't fail.
   
   The creation of the registry can also be part of the constructor.






Issue Time Tracking
---

Worklog Id: (was: 747489)
Time Spent: 1h 20m  (was: 1h 10m)

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.






[GitHub] [hadoop] simbadzina commented on a change in pull request #4092: HADOOP-18167. Add metrics to track delegation token secret manager op…

2022-03-24 Thread GitBox


simbadzina commented on a change in pull request #4092:
URL: https://github.com/apache/hadoop/pull/4092#discussion_r834774762



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -825,4 +860,55 @@ protected void syncTokenOwnerStats() {
   addTokenForOwnerStats(id);
 }
   }
+
+  /**
+   * DelegationTokenSecretManagerMetrics tracks token management operations
+   * and publishes them through the metrics interfaces.
+   */
+  @Metrics(about="Delegation token secret manager metrics", context="token")
+  static class DelegationTokenSecretManagerMetrics implements 
IOStatisticsSource {
+final static String STORE_TOKEN_STAT = "storeToken";
+final static String UPDATE_TOKEN_STAT = "updateToken";
+final static String REMOVE_TOKEN_STAT = "removeToken";
+
+final MetricsRegistry registry = new 
MetricsRegistry("DelegationTokenSecretManagerMetrics");

Review comment:
   I see. A minor change would be to add `LOG.debug("Initialized {}", 
registry);` like in ClientSCMMetrics, just so the IDE and, potentially, our 
style-check steps in testing don't fail.
   
   The creation of the registry can also be part of the constructor.
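
   A rough sketch of that shape (illustrative only; the logger setup is 
assumed):
   ```java
   static class DelegationTokenSecretManagerMetrics implements IOStatisticsSource {
     private static final Logger LOG =
         LoggerFactory.getLogger(DelegationTokenSecretManagerMetrics.class);

     final MetricsRegistry registry;

     DelegationTokenSecretManagerMetrics() {
       registry = new MetricsRegistry("DelegationTokenSecretManagerMetrics");
       // Mirrors ClientSCMMetrics, and keeps static analysis happy that the
       // registry field is used.
       LOG.debug("Initialized {}", registry);
     }
   }
   ```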







[jira] [Work logged] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?focusedWorklogId=747451&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747451
 ]

ASF GitHub Bot logged work on HADOOP-18167:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 21:00
Start Date: 24/Mar/22 21:00
Worklog Time Spent: 10m 
  Work Description: omalley commented on a change in pull request #4092:
URL: https://github.com/apache/hadoop/pull/4092#discussion_r834722208



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -429,11 +445,16 @@ private synchronized void removeExpiredKeys() {
 byte[] password = createPassword(identifier.getBytes(), 
currentKey.getKey());
 DelegationTokenInformation tokenInfo = new DelegationTokenInformation(now
 + tokenRenewInterval, password, getTrackingIdIfEnabled(identifier));
+long start = Time.monotonicNow();
 try {
   storeToken(identifier, tokenInfo);
 } catch (IOException ioe) {
   LOG.error("Could not store token " + formatTokenId(identifier) + "!!",
   ioe);
+} finally {
+  if (metrics != null) {
+metrics.addStoreToken(Time.monotonicNow() - start);
+  }

Review comment:
   I'd be happier if addStoreToken was only updated when the operation was
successful. We should track failures separately with a counter. (I don't think
we need to track which operation had the exception, just track the count of
exceptions that happened getting delegation tokens.)
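
   A rough sketch of that shape, reusing the identifiers from the diff above
(`incrTokenFailure` is a hypothetical counter name, not part of the patch as
shown):

   ```java
long start = Time.monotonicNow();
try {
  storeToken(identifier, tokenInfo);
  // Record the latency only when the store succeeded.
  if (metrics != null) {
    metrics.addStoreToken(Time.monotonicNow() - start);
  }
} catch (IOException ioe) {
  LOG.error("Could not store token " + formatTokenId(identifier) + "!!", ioe);
  // Count failures separately instead of folding them into the timing stat.
  if (metrics != null) {
    metrics.incrTokenFailure(); // hypothetical failure counter
  }
}
   ```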




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747451)
Time Spent: 1h 10m  (was: 1h)

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] omalley commented on a change in pull request #4092: HADOOP-18167. Add metrics to track delegation token secret manager op…

2022-03-24 Thread GitBox


omalley commented on a change in pull request #4092:
URL: https://github.com/apache/hadoop/pull/4092#discussion_r834722208



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
##
@@ -429,11 +445,16 @@ private synchronized void removeExpiredKeys() {
 byte[] password = createPassword(identifier.getBytes(), 
currentKey.getKey());
 DelegationTokenInformation tokenInfo = new DelegationTokenInformation(now
 + tokenRenewInterval, password, getTrackingIdIfEnabled(identifier));
+long start = Time.monotonicNow();
 try {
   storeToken(identifier, tokenInfo);
 } catch (IOException ioe) {
   LOG.error("Could not store token " + formatTokenId(identifier) + "!!",
   ioe);
+} finally {
+  if (metrics != null) {
+metrics.addStoreToken(Time.monotonicNow() - start);
+  }

Review comment:
   I'd be happier if addStoreToken was only updated when the operation was
successful. We should track failures separately with a counter. (I don't think
we need to track which operation had the exception, just track the count of
exceptions that happened getting delegation tokens.)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] omalley commented on pull request #4100: HDFS-16518: Close cached KeyProvider when DFSClient is closed

2022-03-24 Thread GitBox


omalley commented on pull request #4100:
URL: https://github.com/apache/hadoop/pull/4100#issuecomment-1078244353


   I commented on the jira, but:
   
   I don't understand why this is required. Obviously, at JVM shutdown the
cache will be discarded. The order of shutdown hooks isn't deterministic, so
using one isn't a fix against other shutdown hooks using the cache.
   
   Is there some other call to KeyProvider.close() that this should replace?
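
   A standalone illustration of the ordering point (not code from the PR):
the JVM starts all registered shutdown hooks in some unspecified order and
runs them concurrently.

   ```java
public class ShutdownHookOrder {
  public static void main(String[] args) {
    // Two independent hooks: the JVM gives no guarantee about which of
    // these starts or finishes first, and they may run concurrently.
    Runtime.getRuntime().addShutdownHook(
        new Thread(() -> System.out.println("hook that closes the cache")));
    Runtime.getRuntime().addShutdownHook(
        new Thread(() -> System.out.println("hook that still uses the cache")));
  }
}
   ```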


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18172) Change scope of getRootFallbackLink for InodeTree to make them accessible from outside package

2022-03-24 Thread Xing Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xing Lin updated HADOOP-18172:
--
Description: Sometimes, we need to access rootFallBackLink from another
package. We should make them public, similar to what we did for
getMountPoints() in HADOOP-18100.
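
A minimal before/after sketch of the kind of change proposed (types and names
here are illustrative, not the actual InodeTree code):

```java
class InodeTreeSketch {
  private Object rootFallbackLink;

  // Before: default (package-private) visibility, usable only within the
  // declaring package:
  //   Object getRootFallbackLink() { return rootFallbackLink; }

  // After: public, callable from other packages, as HADOOP-18100 did for
  // getMountPoints():
  public Object getRootFallbackLink() {
    return rootFallbackLink;
  }
}
```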

> Change scope of getRootFallbackLink for InodeTree to make them accessible 
> from outside package
> --
>
> Key: HADOOP-18172
> URL: https://issues.apache.org/jira/browse/HADOOP-18172
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>
> Sometimes, we need to access rootFallBackLink from another package. We should
> make them public, similar to what we did for getMountPoints() in
> HADOOP-18100.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18172) Change scope of getRootFallbackLink for InodeTree to make them accessible from outside package

2022-03-24 Thread Xing Lin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xing Lin reassigned HADOOP-18172:
-

Assignee: Xing Lin

> Change scope of getRootFallbackLink for InodeTree to make them accessible 
> from outside package
> --
>
> Key: HADOOP-18172
> URL: https://issues.apache.org/jira/browse/HADOOP-18172
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18172) Change scope of getRootFallbackLink for InodeTree to make them accessible from outside package

2022-03-24 Thread Xing Lin (Jira)
Xing Lin created HADOOP-18172:
-

 Summary: Change scope of getRootFallbackLink for InodeTree to make 
them accessible from outside package
 Key: HADOOP-18172
 URL: https://issues.apache.org/jira/browse/HADOOP-18172
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Xing Lin






--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4105: YARN-11088. Introduce the config to control the AM allocated to non-e…

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4105:
URL: https://github.com/apache/hadoop/pull/4105#issuecomment-1077969608


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   8m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   9m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   8m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m  1s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 10s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4105/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m 59s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 116m 13s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 277m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4105/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4105 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 340326807f2a 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5fa6d52b293e8d3eacfa798e3f8642fd470d5a93 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Resul

[jira] [Resolved] (HADOOP-18171) NameNode Access Time Precision

2022-03-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-18171.

Resolution: Invalid

Setting the value to 0 disables access times for HDFS. You should ask
CDP-related questions to Cloudera support instead of filing this issue.
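
For context, this is the hdfs-site.xml setting in question; a sketch of
disabling it (recent Hadoop releases use the name
dfs.namenode.accesstime.precision, for which dfs.access.time.precision is a
deprecated alias):

```xml
<!-- hdfs-site.xml: 0 disables access-time updates entirely; the default is
     3600000 ms (1 hour), i.e. atime is updated at most once per hour. -->
<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>0</value>
</property>
```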

> NameNode Access Time Precision
> --
>
> Key: HADOOP-18171
> URL: https://issues.apache.org/jira/browse/HADOOP-18171
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: As of now we are on CDH version 6.3.4 and we are
> planning to upgrade it to CDP version 7.1.4. For that, Cloudera wants us to
> disable the namenode property dfs.access.time.precision by changing its value
> to 0. The current value for this property is 1 hour. So my question is: how
> does this value impact the current scenario? What is the effect, and what
> will happen if I set it to zero?
>Reporter: Doug
>Priority: Major
> Attachments: namenodeaccesstime.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4101: HDFS-16519. Add throttler to EC reconstruction

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4101:
URL: https://github.com/apache/hadoop/pull/4101#issuecomment-1077857491


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 227m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4101/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 329m 33s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestHdfsConfigFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4101/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a87865405e9f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 74649859eca981cbe1645a4c4f327a34bdddcd8b |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4101/2/testReport/ |
   | Max. process+thread count | 3117 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4101/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] tasanuma commented on pull request #3915: HDFS-16434. Add opname to read/write lock for remaining operations

2022-03-24 Thread GitBox


tasanuma commented on pull request #3915:
URL: https://github.com/apache/hadoop/pull/3915#issuecomment-1077804296


   #4064 added a new `namesystem.writeUnlock()`. Could you add the opname too? 
The others look good to me.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on a change in pull request #3915: HDFS-16434. Add opname to read/write lock for remaining operations

2022-03-24 Thread GitBox


tasanuma commented on a change in pull request #3915:
URL: https://github.com/apache/hadoop/pull/3915#discussion_r834081133



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeAdminDefaultMonitor.java
##
@@ -158,7 +158,7 @@ public void run() {
   LOG.warn("DatanodeAdminMonitor caught exception when processing node.",
   e);
 } finally {
-  namesystem.writeUnlock();
+  namesystem.writeUnlock("datanodeAdminMonitorThread");

Review comment:
   ```suggestion
 namesystem.writeUnlock("DatanodeAdminMonitorThread");
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease

2022-03-24 Thread GitBox


Hexiaoqiao commented on a change in pull request #4057:
URL: https://github.com/apache/hadoop/pull/4057#discussion_r834482510



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -2751,6 +2751,13 @@ public boolean checkBlockReportLease(BlockReportContext 
context,
   return true;
 }
 DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
+if (node == null) {
+  final UnregisteredNodeException e = new 
UnregisteredNodeException(nodeID, null);
+  NameNode.stateChangeLog.error("BLOCK* NameSystem.getDatanode: " + "Data 
node " + nodeID +

Review comment:
   I am confused about whether we need the log here, since the upper method
also logs when it meets UnregisteredNodeException. Although it is DEBUG level
now, we could improve that to WARN, right?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-15245) S3AInputStream.skip() to use lazy seek

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15245?focusedWorklogId=747304&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747304
 ]

ASF GitHub Bot logged work on HADOOP-15245:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 16:10
Start Date: 24/Mar/22 16:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3927:
URL: https://github.com/apache/hadoop/pull/3927#issuecomment-1069321823






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747304)
Time Spent: 4h 20m  (was: 4h 10m)

> S3AInputStream.skip() to use lazy seek
> --
>
> Key: HADOOP-15245
> URL: https://issues.apache.org/jira/browse/HADOOP-15245
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> the default skip() does a read and discard of all bytes, no matter how far
> ahead the skip is. This is very inefficient if the skip() is being done on
> S3A random IO, though it is less clear what to do when in sequential mode.
> Proposed:
> * add an optimized version of S3AInputStream.skip() which does a lazy seek,
> which itself will decide when to skip() vs issue a new GET (a rough sketch
> follows below).
> * add some more instrumentation to measure how often this gets used
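
A rough sketch of what such a skip() could look like (this assumes a
contentLength field as in S3AInputStream and is not the HADOOP-15245 patch
itself):

```java
@Override
public long skip(long n) throws IOException {
  if (n <= 0) {
    return 0;
  }
  long pos = getPos();
  // Clamp naively to the object length so the seek cannot run past EOF.
  long target = Math.min(pos + n, contentLength);
  // Lazy seek: the stream is not repositioned here; the next read decides
  // whether to drain the open stream forward or issue a new ranged GET.
  seek(target);
  return target - pos;
}
```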



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3927: HADOOP-15245. S3AInputStream.skip() to use lazy seek

2022-03-24 Thread GitBox


hadoop-yetus removed a comment on pull request #3927:
URL: https://github.com/apache/hadoop/pull/3927#issuecomment-1069321823






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3960: HDFS-16446. Consider ioutils of disk when choosing volume

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #3960:
URL: https://github.com/apache/hadoop/pull/3960#issuecomment-1077761539


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  26m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 39s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  trunk passed  |
   | +0 :ok: |  spotbugs  |   0m 32s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  25m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 28s |  |  the patch passed  |
   | -1 :x: |  cc  |  20m 28s | 
[/results-compile-cc-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3960/5/artifact/out/results-compile-cc-root.txt)
 |  root generated 30 new + 177 unchanged - 28 fixed = 207 total (was 205)  |
   | +1 :green_heart: |  golang  |  20m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  20m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 47s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 49s |  |  the patch passed  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   3m 50s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3960/5/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 7 new + 0 unchanged - 0 fixed = 7 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  26m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 28s |  |  hadoop-project in the patch 
passed.  |
   | -1 :x: |  unit  |  17m 51s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3960/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 353m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3960/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   1m  1s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3960/5/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 569m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.datanode.DataNode.diskIOUtilManager; locked 50% 
of time  Unsynchronized access at DataNode.java:50% of time  Unsynchronized 
access at DataNode.java:[line 2473] |
   |  |  
org.apache.hadoop.hdfs.server.datanode.DiskIOUtilManager.getDiskIoUtils() may 
fail to close stream  At DiskIOUtilManager.java:stream  At 
DiskIOUtilManager.java:[line 209] |
   |  |  Possible null pointer dereference in new 
org.apache.hadoop.hdfs.server.datanode.DiskIOUtilManager$DiskLocation(StorageLocation)
 due to return value of called method  Dereferenced at 
DiskIOUtilManager.java:new 
org.apache.hadoop.hdfs.server.datanode.DiskIOUtilManager$DiskLocation(StorageLocation)
 due to return value of called method  Dereferenced at 
DiskIOUtilManager.java:[line 54] |
   |  |  Possible null pointer dereference in new 
org.apache.hadoop.hdfs.server.datanode.DiskIOUtilManager$DiskLocation(StorageLocation)
 due to return value 

[GitHub] [hadoop] zuston commented on pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…

2022-03-24 Thread GitBox


zuston commented on pull request #4060:
URL: https://github.com/apache/hadoop/pull/4060#issuecomment-1077727264


   Thanks for your patience and review @9uapaw 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zuston opened a new pull request #4105: YARN-11088. Introduce the config to control the AM allocated to non-e…

2022-03-24 Thread GitBox


zuston opened a new pull request #4105:
URL: https://github.com/apache/hadoop/pull/4105


   …xclusive nodes
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13386) Upgrade Avro to 1.8.x or later

2022-03-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13386:
---
Assignee: PJ Fanning
  Status: Patch Available  (was: Reopened)

> Upgrade Avro to 1.8.x or later
> --
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13386) Upgrade Avro to 1.8.x or later

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?focusedWorklogId=747228&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747228
 ]

ASF GitHub Bot logged work on HADOOP-13386:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 13:32
Start Date: 24/Mar/22 13:32
Worklog Time Spent: 10m 
  Work Description: aajisaka edited a comment on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077631722


   Hi @pjfanning - I'm not asking you to fix the issue; please ignore the
errors by adding some entries in
https://github.com/apache/hadoop/blob/1b29c9bfeee0035dd042357038b963843169d44c/hadoop-mapreduce-project/dev-support/findbugs-exclude.xml
   
   Note that I tried to upgrade Avro to 1.10.2 but the bug is still there.
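
   For reference, entries in that file take roughly this shape (the class name
comes from the findbugs warning discussed on this PR; the bug pattern string
is an assumption and should be verified against the actual spotbugs report):

   ```xml
<Match>
  <!-- Assumed pattern name for "known null value checked with instanceof";
       confirm against the spotbugs report before committing. -->
  <Class name="org.apache.hadoop.mapreduce.jobhistory.JobSubmitted"/>
  <Bug pattern="NP_NULL_INSTANCEOF"/>
</Match>
   ```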


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747228)
Time Spent: 4h 10m  (was: 4h)

> Upgrade Avro to 1.8.x or later
> --
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka edited a comment on pull request #3990: [HADOOP-13386] upgrade to avro 1.9.2

2022-03-24 Thread GitBox


aajisaka edited a comment on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077631722


   Hi @pjfanning - I'm not asking you to fix the issue; please ignore the
errors by adding some entries in
https://github.com/apache/hadoop/blob/1b29c9bfeee0035dd042357038b963843169d44c/hadoop-mapreduce-project/dev-support/findbugs-exclude.xml
   
   Note that I tried to upgrade Avro to 1.10.2 but the bug is still there.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18142) Increase precommit job timeout from 24 hr to 30 hr

2022-03-24 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503046#comment-17503046
 ] 

Viraj Jasani edited comment on HADOOP-18142 at 3/24/22, 1:30 PM:
-

[~kgyrtkirk] I went through the changes you have mentioned above, nice work
indeed! Based on my recent observation with
[PR#4000|https://github.com/apache/hadoop/pull/4000], the full Hadoop build
definitely exceeds the current timeout of 24 hr (regardless of whether we rate
limit, i.e. run only 1 build, or run multiple builds concurrently), hence
increasing the timeout to 30 hr is a hard requirement for the entire Hadoop
build to finish.

For the improvements that you have mentioned above (specifically disabling
concurrent builds and auto-kill for updated PRs), I will create a new
follow-up Jira.


was (Author: vjasani):
[~kgyrtkirk] I went through the changes you have mentioned above, nice work
indeed! Based on my recent observation with
[PR#4000|https://github.com/apache/hadoop/pull/4000], the full Hadoop build
definitely exceeds the current timeout of 24 hr (regardless of whether we rate
limit, i.e. run only 1 build, or run multiple builds concurrently), hence
increasing the timeout to 30 hr is a hard requirement for the entire Hadoop
build to finish.

For the improvements that you have mentioned above (specifically disabling
concurrent builds and auto-kill for updated PRs), it might be worth opening a
new Jira, WDYT?

> Increase precommit job timeout from 24 hr to 30 hr
> --
>
> Key: HADOOP-18142
> URL: https://issues.apache.org/jira/browse/HADOOP-18142
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As per some recent precommit build results, full build QA is not getting 
> completed in 24 hr (recent example 
> [here|https://github.com/apache/hadoop/pull/4000] where more than 5 builds 
> timed out after 24 hr). We should increase it to 30 hr.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18142) Increase precommit job timeout from 24 hr to 30 hr

2022-03-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-18142:
--
Target Version/s: 3.4.0
  Status: Patch Available  (was: In Progress)

> Increase precommit job timeout from 24 hr to 30 hr
> --
>
> Key: HADOOP-18142
> URL: https://issues.apache.org/jira/browse/HADOOP-18142
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As per some recent precommit build results, full build QA is not getting 
> completed in 24 hr (recent example 
> [here|https://github.com/apache/hadoop/pull/4000] where more than 5 builds 
> timed out after 24 hr). We should increase it to 30 hr.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-18142) Increase precommit job timeout from 24 hr to 30 hr

2022-03-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-18142 started by Viraj Jasani.
-
> Increase precommit job timeout from 24 hr to 30 hr
> --
>
> Key: HADOOP-18142
> URL: https://issues.apache.org/jira/browse/HADOOP-18142
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As per some recent precommit build results, full build QA is not getting 
> completed in 24 hr (recent example 
> [here|https://github.com/apache/hadoop/pull/4000] where more than 5 builds 
> timed out after 24 hr). We should increase it to 30 hr.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut removed a comment on pull request #4009: HDFS-16477. [SPS]: Add metric PendingSPSPaths for getting the number of paths to be processed by SPS

2022-03-24 Thread GitBox


tomscut removed a comment on pull request #4009:
URL: https://github.com/apache/hadoop/pull/4009#issuecomment-1073427612


   Hi @ayushtkn @Hexiaoqiao @ferhui, could you please take a look? Thanks.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13386) Upgrade Avro to 1.8.x or later

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?focusedWorklogId=747177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747177
 ]

ASF GitHub Bot logged work on HADOOP-13386:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 12:23
Start Date: 24/Mar/22 12:23
Worklog Time Spent: 10m 
  Work Description: aajisaka edited a comment on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077570607


   Hi @pjfanning - Would you ignore the findbugs error? 
   
   >  A known null value is checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:[line 1238]
   
   The class is generated by Avro and the generated source code is as follows.
I think it is a bug on the Avro side and we can simply ignore it.
   ```java
 java.lang.CharSequence k0 = null;
 k0 = in.readString(k0 instanceof Utf8 ? (Utf8)k0 : null);
 java.lang.CharSequence v0 = null;
 v0 = in.readString(v0 instanceof Utf8 ? (Utf8)v0 : null);
   ```
   
   Sorry for the late response.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747177)
Time Spent: 4h  (was: 3h 50m)

> Upgrade Avro to 1.8.x or later
> --
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka edited a comment on pull request #3990: [HADOOP-13386] upgrade to avro 1.9.2

2022-03-24 Thread GitBox


aajisaka edited a comment on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077570607


   Hi @pjfanning - Would you ignore the findbugs error? 
   
   >  A known null value is checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:[line 1238]
   
   The class is generated by Avro and the generated source code is as follows.
I think it is a bug on the Avro side and we can simply ignore it.
   ```java
 java.lang.CharSequence k0 = null;
 k0 = in.readString(k0 instanceof Utf8 ? (Utf8)k0 : null);
 java.lang.CharSequence v0 = null;
 v0 = in.readString(v0 instanceof Utf8 ? (Utf8)v0 : null);
   ```
   
   Sorry for the late response.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13386) Upgrade Avro to 1.8.x or later

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?focusedWorklogId=747176&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747176
 ]

ASF GitHub Bot logged work on HADOOP-13386:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 12:22
Start Date: 24/Mar/22 12:22
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077570607


   Hi @pjfanning - Would you ignore the findbugs error? 
   
   >  A known null value is checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:[line 1238]
   
   The class is generated by Avro and the generated source code is as follows.
I think it is a bug on the Avro side and we can simply ignore it.
   ```
 java.lang.CharSequence k0 = null;
 k0 = in.readString(k0 instanceof Utf8 ? (Utf8)k0 : null);
 java.lang.CharSequence v0 = null;
 v0 = in.readString(v0 instanceof Utf8 ? (Utf8)v0 : null);
   ```
   
   Sorry for the late response.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747176)
Time Spent: 3h 50m  (was: 3h 40m)

> Upgrade Avro to 1.8.x or later
> --
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #3990: [HADOOP-13386] upgrade to avro 1.9.2

2022-03-24 Thread GitBox


aajisaka commented on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077570607


   Hi @pjfanning - Would you ignore the findbugs error? 
   
   >  A known null value is checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:checked to see if it is an instance of 
org.apache.avro.util.Utf8 in 
org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.customDecode(ResolvingDecoder)
 At JobSubmitted.java:[line 1238]
   
   The class is generated by Avro and the generated source code is as follows.
I think it is a bug on the Avro side and we can simply ignore it.
   ```
 java.lang.CharSequence k0 = null;
 k0 = in.readString(k0 instanceof Utf8 ? (Utf8)k0 : null);
 java.lang.CharSequence v0 = null;
 v0 = in.readString(v0 instanceof Utf8 ? (Utf8)v0 : null);
   ```
   
   Sorry for the late response.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18171) NameNode Access Time Precision

2022-03-24 Thread Doug (Jira)
Doug created HADOOP-18171:
-

 Summary: NameNode Access Time Precision
 Key: HADOOP-18171
 URL: https://issues.apache.org/jira/browse/HADOOP-18171
 Project: Hadoop Common
  Issue Type: Improvement
 Environment: As of now we are on CDH version 6.3.4 and we are planning
to upgrade it to CDP version 7.1.4. For that, Cloudera wants us to disable the
namenode property dfs.access.time.precision by changing its value to 0. The
current value for this property is 1 hour. So my question is: how does this
value impact the current scenario? What is the effect, and what will happen
if I set it to zero?
Reporter: Doug
 Attachments: namenodeaccesstime.png





--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4104: HDFS-16520. Improve EC pread: avoid potential reading whole block

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4104:
URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1077561251


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  96m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4104 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 993df16c3cc5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ca5aafce9f31472ec2b9b58bfdb7349371a98aa6 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/2/testReport/ |
   | Max. process+thread count | 726 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org


[GitHub] [hadoop] cndaimin commented on pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException

2022-03-24 Thread GitBox


cndaimin commented on pull request #4077:
URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1077544022


   @tomscut Thanks for your review and advice!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=747136&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747136
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 11:33
Start Date: 24/Mar/22 11:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077529427


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  21m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  25m 46s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  23m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 60 
unchanged - 0 fixed = 62 total (was 60)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 213m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4036 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 49ac92c75ee7 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eb83019dbe1e8cc743cbd860b2e0dcd287bb6ab2 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/testReport/ |
   | Max. process+thread count | 1246 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus commented on pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077529427


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  21m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  25m 46s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  23m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 60 
unchanged - 0 fixed = 62 total (was 60)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 213m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4036 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 49ac92c75ee7 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / eb83019dbe1e8cc743cbd860b2e0dcd287bb6ab2 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/testReport/ |
   | Max. process+thread count | 1246 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4036/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HADOOP-13386) Upgrade Avro to 1.8.x or later

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?focusedWorklogId=747069&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747069
 ]

ASF GitHub Bot logged work on HADOOP-13386:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 09:55
Start Date: 24/Mar/22 09:55
Worklog Time Spent: 10m 
  Work Description: brahmareddybattula commented on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077442958


   @aajisaka do you have cycles to review this... Patch LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747069)
Time Spent: 3h 40m  (was: 3.5h)

> Upgrade Avro to 1.8.x or later
> --
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Ben McCann
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Avro 1.8.x makes generated classes serializable which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brahmareddybattula commented on pull request #3990: [HADOOP-13386] upgrade to avro 1.9.2

2022-03-24 Thread GitBox


brahmareddybattula commented on pull request #3990:
URL: https://github.com/apache/hadoop/pull/3990#issuecomment-1077442958


   @aajisaka do you have cycles to review this... Patch LGTM.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15245) S3AInputStream.skip() to use lazy seek

2022-03-24 Thread Daniel Carl Jones (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Carl Jones reassigned HADOOP-15245:
--

Assignee: Ahmar Suhail

> S3AInputStream.skip() to use lazy seek
> --
>
> Key: HADOOP-15245
> URL: https://issues.apache.org/jira/browse/HADOOP-15245
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> the default skip() does a read and discard of all bytes, no matter how far 
> ahead the skip is. This is very inefficient when skip() is done on S3A 
> random IO, though it is less clear what the right behaviour is in 
> sequential mode.
> Proposed: 
> * add an optimized version of S3AInputStream.skip() which does a lazy seek, 
> which itself will decide when to skip() vs issue a new GET.
> * add some more instrumentation to measure how often this gets used
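
For illustration, a minimal Java sketch of the lazy-seek idea described
above. The class and field names are hypothetical, not the actual
S3AInputStream members; the point is that skip() only records the new
position, and the next read() decides whether to drain the open HTTP
stream or abort it and issue a fresh ranged GET.

    // Hypothetical sketch: skip() becomes O(1); no bytes are read and discarded.
    class LazySeekStream {
      private long nextReadPos;          // where the next read should start
      private final long contentLength;  // total length of the S3 object

      LazySeekStream(long contentLength) {
        this.contentLength = contentLength;
      }

      long skip(long n) {
        if (n <= 0) {
          return 0;
        }
        // Clamp the target position to the end of the object.
        long target = Math.min(nextReadPos + n, contentLength);
        long skipped = target - nextReadPos;
        // Record the position only; the next read() pays the cost of
        // repositioning (drain a small gap, or reopen on a large one).
        nextReadPos = target;
        return skipped;
      }
    }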



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12020) Support AWS S3 reduced redundancy storage class

2022-03-24 Thread Daniel Carl Jones (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Carl Jones reassigned HADOOP-12020:
--

Assignee: Monthon Klongklaew

> Support AWS S3 reduced redundancy storage class
> ---
>
> Key: HADOOP-12020
> URL: https://issues.apache.org/jira/browse/HADOOP-12020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: Hadoop on AWS
>Reporter: Yann Landrin-Schweitzer
>Assignee: Monthon Klongklaew
>Priority: Major
>
> Amazon S3 uses, by default, the NORMAL_STORAGE class for s3 objects.
> This offers, according to Amazon's material, 99.999999999% reliability.
> For many applications, however, the 99.99% reliability offered by the 
> REDUCED_REDUNDANCY storage class is amply sufficient, and comes with a 
> significant cost saving.
> HDFS, when using the legacy s3n protocol, or the new s3a scheme, should 
> support overriding the default storage class of created s3 objects so that 
> users can take advantage of this cost benefit.
> This would require minor changes to the s3n and s3a drivers, using 
> a configuration property fs.s3n.storage.class to override the default 
> storage class when desirable.
> This override could be implemented in Jets3tNativeFileSystemStore with:
>   S3Object object = new S3Object(key);
>   ...
>   if (storageClass != null) object.setStorageClass(storageClass);
> It would take a more complex form in s3a, e.g. setting:
>   InitiateMultipartUploadRequest initiateMPURequest =
>       new InitiateMultipartUploadRequest(bucket, key, om);
>   if (storageClass != null) {
>     initiateMPURequest = initiateMPURequest.withStorageClass(storageClass);
>   }
> and similar statements in various places.
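
As a hedged sketch of the s3n side (not the actual driver code), the
configuration lookup and the JetS3t call quoted above would combine roughly
like this; the helper class name is invented for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.jets3t.service.model.S3Object;

    // Sketch only: apply an optionally configured storage class when
    // creating the JetS3t S3Object, per the proposal above.
    class StorageClassSketch {
      static S3Object newObject(Configuration conf, String key) {
        S3Object object = new S3Object(key);
        // fs.s3n.storage.class is the property name proposed above;
        // null means "keep Amazon's default".
        String storageClass = conf.get("fs.s3n.storage.class");
        if (storageClass != null) {
          object.setStorageClass(storageClass);
        }
        return object;
      }
    }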



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16155) S3AInputStream read(bytes[]) to not retry on read failure: pass action up

2022-03-24 Thread Daniel Carl Jones (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Carl Jones reassigned HADOOP-16155:
--

Assignee: Ahmar Suhail

> S3AInputStream read(bytes[]) to not retry on read failure: pass action up
> -
>
> Key: HADOOP-16155
> URL: https://issues.apache.org/jira/browse/HADOOP-16155
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ahmar Suhail
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The S3AInputStream reacts to read(byte[]) failure by reopening the stream, 
> just as for the single byte read(). We shouldn't need to do that. Instead 
> just close the stream, return 0 and let the caller decide what to do. 
> Why so? 
> # it is in the contract of InputStream.read(bytes[]),
> # readFully() can handle the 0 in its loop
> # other apps can decide what to do.
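
A sketch of the proposed behaviour, for illustration only. The wrapper
class below is hypothetical (the real change would live inside
S3AInputStream itself); it shows the retry decision moving to the caller.

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical wrapper showing the proposal: on a block-read failure,
    // close the stream and return 0 instead of reopening and retrying.
    class NonRetryingStream extends FilterInputStream {
      NonRetryingStream(InputStream in) {
        super(in);
      }

      @Override
      public int read(byte[] buf, int off, int len) throws IOException {
        if (len == 0) {
          return 0;
        }
        try {
          return in.read(buf, off, len);
        } catch (IOException e) {
          in.close();  // give up the broken connection
          return 0;    // per the proposal: readFully()'s loop handles the 0
        }
      }
    }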



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] singer-bin commented on pull request #4069: HDFS-16457.Make fs.getspaceused.classname reconfigurable

2022-03-24 Thread GitBox


singer-bin commented on pull request #4069:
URL: https://github.com/apache/hadoop/pull/4069#issuecomment-1077421729


   Could you please review my code? Thanks a lot.  @sunchao @tasanuma @ayushtkn 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-14661) S3A to support Requester Pays Buckets

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14661?focusedWorklogId=747031&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-747031
 ]

ASF GitHub Bot logged work on HADOOP-14661:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 09:01
Start Date: 24/Mar/22 09:01
Worklog Time Spent: 10m 
  Work Description: dannycjones commented on pull request #3962:
URL: https://github.com/apache/hadoop/pull/3962#issuecomment-1077391264


   Thanks Steve, Mukund, and Monthon for the reviews!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 747031)
Time Spent: 6.5h  (was: 6h 20m)

> S3A to support Requester Pays Buckets
> -
>
> Key: HADOOP-14661
> URL: https://issues.apache.org/jira/browse/HADOOP-14661
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, util
>Affects Versions: 3.0.0-alpha3
>Reporter: Mandus Momberg
>Assignee: Daniel Carl Jones
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.3
>
> Attachments: HADOOP-14661.patch
>
>   Original Estimate: 2h
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Amazon S3 has the ability to charge the requester for the cost of accessing 
> S3. This is called Requester Pays Buckets. 
> In order to access these buckets, each request needs to be signed with a 
> specific header. 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html
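
The header in question is x-amz-request-payer: requester. As a hedged
illustration (assuming the AWS SDK for Java 1.x; the bucket and key names
are placeholders), a per-request opt-in looks roughly like this:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GetObjectRequest;
    import com.amazonaws.services.s3.model.S3Object;

    // Illustrative only: the boolean constructor argument marks the request
    // as requester-pays, so the SDK adds the x-amz-request-payer header.
    public class RequesterPaysExample {
      public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        GetObjectRequest req =
            new GetObjectRequest("example-bucket", "example-key", true);
        try (S3Object obj = s3.getObject(req)) {
          System.out.println("length = "
              + obj.getObjectMetadata().getContentLength());
        }
      }
    }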



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannycjones commented on pull request #3962: HADOOP-14661. Add S3 requester pays bucket support to S3A

2022-03-24 Thread GitBox


dannycjones commented on pull request #3962:
URL: https://github.com/apache/hadoop/pull/3962#issuecomment-1077391264


   Thanks Steve, Mukund, and Monthon for the reviews!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] liubingxing commented on pull request #4088: HDFS-16514. Reduce the failover sleep time if multiple namenode are c…

2022-03-24 Thread GitBox


liubingxing commented on pull request #4088:
URL: https://github.com/apache/hadoop/pull/4088#issuecomment-1077376875


   @tasanuma Please take a look at this. Thank you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4104: HDFS-16520. Improve EC pread: avoid potential reading whole block

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4104:
URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1077373154


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-client: The patch generated 3 new + 16 
unchanged - 0 fixed = 19 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4104 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ef7ab1a1ea82 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / afaa5ded02cde8eac9a6521f3fdea4be314e011c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/1/testReport/ |
   | Max. process+thread count | 671 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] aajisaka merged pull request #4103: YARN-10720. (branch-2.10) YARN WebAppProxyServlet should support connection timeout…

2022-03-24 Thread GitBox


aajisaka merged pull request #4103:
URL: https://github.com/apache/hadoop/pull/4103


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka merged pull request #4102: YARN-10720. (branch-3.2) YARN WebAppProxyServlet should support connection timeout…

2022-03-24 Thread GitBox


aajisaka merged pull request #4102:
URL: https://github.com/apache/hadoop/pull/4102


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4102: YARN-10720. (branch-3.2) YARN WebAppProxyServlet should support connection timeout…

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4102:
URL: https://github.com/apache/hadoop/pull/4102#issuecomment-1077367563


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 29s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   7m 46s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m 56s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   4m 30s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  15m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   6m 54s |  |  
hadoop-yarn-project_hadoop-yarn generated 0 new + 192 unchanged - 10 fixed = 
192 total (was 202)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   4m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 48s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m  2s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 53s |  |  hadoop-yarn-server-web-proxy in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 117m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4102/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4102 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 9f85641c1b81 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / dbeb41b46a2cef9a48983add9c3b191915bd6de5 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4102/1/testReport/ |
   | Max. process+thread count | 334 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4102/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4103: YARN-10720. (branch-2.10) YARN WebAppProxyServlet should support connection timeout…

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4103:
URL: https://github.com/apache/hadoop/pull/4103#issuecomment-1077364475


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-2.10 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  12m 43s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   8m 19s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  compile  |   6m 55s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  branch-2.10 passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 53s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javac  |   7m 53s |  |  
hadoop-yarn-project_hadoop-yarn-jdkAzulSystems,Inc.-1.7.0_262-b10 with JDK Azul 
Systems, Inc.-1.7.0_262-b10 generated 0 new + 138 unchanged - 10 fixed = 138 
total (was 148)  |
   | +1 :green_heart: |  compile  |   6m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  javac  |   6m 39s |  |  
hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07
 with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 generated 0 new 
+ 128 unchanged - 10 fixed = 128 total (was 138)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4103/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 231 unchanged 
- 0 fixed = 232 total (was 231)  |
   | +1 :green_heart: |  mvnsite  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 18s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 42s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 14s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 36s |  |  hadoop-yarn-server-web-proxy in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  88m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4103/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4103 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux cd09d98a63a0 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / bbf1250e054faedcd40e14024f3c4be1717838ea |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, 
Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multib

[GitHub] [hadoop] hadoop-yetus commented on pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread

2022-03-24 Thread GitBox


hadoop-yetus commented on pull request #4032:
URL: https://github.com/apache/hadoop/pull/4032#issuecomment-1077343940


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 158m 58s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4032/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 34s |  |  ASF License check generated no 
output?  |
   |  |   | 259m  7s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestSnapshotPathINodes |
   |   | hadoop.hdfs.server.namenode.TestFSImageWithAcl |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotReplication |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   |   | hadoop.hdfs.server.namenode.TestDeleteRace |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4032/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4032 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0b01f791297c 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 499ee0298fb320b2fb5238fd283ff2ca53fc4f02 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4032/2/testReport/ |
   | Max. process+thread count | 317

[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=746997&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-746997
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 08:00
Start Date: 24/Mar/22 08:00
Worklog Time Spent: 10m 
  Work Description: zhongjingxiong commented on a change in pull request 
#4036:
URL: https://github.com/apache/hadoop/pull/4036#discussion_r834016387



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
##
@@ -706,35 +706,40 @@ public void testCreateLocalTempFile() throws IOException {
   public void testUnZip() throws IOException {
 // make sa simple zip
 final File simpleZip = new File(del, FILE);
-OutputStream os = new FileOutputStream(simpleZip); 
-ZipOutputStream tos = new ZipOutputStream(os);
-try {
-  ZipEntry ze = new ZipEntry("foo");
-  byte[] data = "some-content".getBytes("UTF-8");
-  ze.setSize(data.length);
-  tos.putNextEntry(ze);
-  tos.write(data);
-  tos.closeEntry();
-  tos.flush();
-  tos.finish();
-} finally {
-  tos.close();
-}
-
-// successfully unzip it into an existing dir:
-FileUtil.unZip(simpleZip, tmp);
-// check result:
-assertTrue(new File(tmp, "foo").exists());
-assertEquals(12, new File(tmp, "foo").length());
-
-final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-regularFile.createNewFile();
-assertTrue(regularFile.exists());
-try {
-  FileUtil.unZip(simpleZip, regularFile);
-  assertTrue("An IOException expected.", false);
-} catch (IOException ioe) {
-  // okay
+try (OutputStream os = new FileOutputStream(simpleZip);
+ ZipArchiveOutputStream tos = new ZipArchiveOutputStream(os)) {
+  try {
+ZipArchiveEntry ze = new  ZipArchiveEntry("foo");
+ze.setUnixMode(0555);
+byte[] data = "some-content".getBytes("UTF-8");
+ze.setSize(data.length);
+tos.putArchiveEntry(ze);
+tos.write(data);
+tos.closeArchiveEntry();
+tos.flush();
+tos.finish();
+  } finally {
+tos.close();
+  }
+
+  // successfully unzip it into an existing dir:
+  FileUtil.unZip(simpleZip, tmp);
+  // check result:
+  assertTrue(new File(tmp, "foo").exists());
+  assertEquals(12, new File(tmp, "foo").length());
+  assertTrue("file lacks execute permissions", new File(tmp, 
"foo").canExecute());
+  assertFalse("file has write permissions", new File(tmp, 
"foo").canWrite());
+  assertTrue("file lacks read permissions", new File(tmp, 
"foo").canRead());
+
+  final File regularFile = new File(tmp, 
"QuickBrownFoxJumpsOverTheLazyDog");
+  regularFile.createNewFile();
+  assertTrue(regularFile.exists());
+  try {

Review comment:
   The code for the latest branch has been merged.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 746997)
Time Spent: 6h 50m  (was: 6h 40m)

> Fileutil's unzip method causes unzipped files to lose their original 
> permissions
> 
>
> Key: HADOOP-18145
> URL: https://issues.apache.org/jira/browse/HADOOP-18145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: jingxiong zhong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> When Spark decompresses a zip file with the unzip method of FileUtil, a 
> file that originally had the executable permission loses it after 
> extraction; we should preserve the original permissions.
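
As a hedged sketch of the fix direction (not the actual FileUtil patch),
the unix mode recorded in each zip entry can be re-applied after
extraction. commons-compress exposes it via ZipArchiveEntry.getUnixMode(),
as in the test diff above; the mapping below is POSIX-only.

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.attribute.PosixFilePermission;
    import java.util.EnumSet;
    import java.util.Set;
    import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;

    // Sketch: translate a zip entry's unix mode bits (e.g. 0555 in the test
    // above) into PosixFilePermissions and apply them to the extracted file.
    class UnzipPermissionsSketch {
      static void applyPermissions(ZipArchiveEntry entry, File target)
          throws IOException {
        int mode = entry.getUnixMode();
        if (mode == 0) {
          return;  // archive recorded no permissions; leave defaults
        }
        Set<PosixFilePermission> perms =
            EnumSet.noneOf(PosixFilePermission.class);
        if ((mode & 0400) != 0) perms.add(PosixFilePermission.OWNER_READ);
        if ((mode & 0200) != 0) perms.add(PosixFilePermission.OWNER_WRITE);
        if ((mode & 0100) != 0) perms.add(PosixFilePermission.OWNER_EXECUTE);
        if ((mode & 0040) != 0) perms.add(PosixFilePermission.GROUP_READ);
        if ((mode & 0020) != 0) perms.add(PosixFilePermission.GROUP_WRITE);
        if ((mode & 0010) != 0) perms.add(PosixFilePermission.GROUP_EXECUTE);
        if ((mode & 0004) != 0) perms.add(PosixFilePermission.OTHERS_READ);
        if ((mode & 0002) != 0) perms.add(PosixFilePermission.OTHERS_WRITE);
        if ((mode & 0001) != 0) perms.add(PosixFilePermission.OTHERS_EXECUTE);
        Files.setPosixFilePermissions(target.toPath(), perms);
      }
    }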



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhongjingxiong commented on a change in pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


zhongjingxiong commented on a change in pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#discussion_r834016387



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
##
@@ -706,35 +706,40 @@ public void testCreateLocalTempFile() throws IOException {
   public void testUnZip() throws IOException {
 // make sa simple zip
 final File simpleZip = new File(del, FILE);
-OutputStream os = new FileOutputStream(simpleZip); 
-ZipOutputStream tos = new ZipOutputStream(os);
-try {
-  ZipEntry ze = new ZipEntry("foo");
-  byte[] data = "some-content".getBytes("UTF-8");
-  ze.setSize(data.length);
-  tos.putNextEntry(ze);
-  tos.write(data);
-  tos.closeEntry();
-  tos.flush();
-  tos.finish();
-} finally {
-  tos.close();
-}
-
-// successfully unzip it into an existing dir:
-FileUtil.unZip(simpleZip, tmp);
-// check result:
-assertTrue(new File(tmp, "foo").exists());
-assertEquals(12, new File(tmp, "foo").length());
-
-final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-regularFile.createNewFile();
-assertTrue(regularFile.exists());
-try {
-  FileUtil.unZip(simpleZip, regularFile);
-  assertTrue("An IOException expected.", false);
-} catch (IOException ioe) {
-  // okay
+try (OutputStream os = new FileOutputStream(simpleZip);
+ ZipArchiveOutputStream tos = new ZipArchiveOutputStream(os)) {
+  try {
+ZipArchiveEntry ze = new  ZipArchiveEntry("foo");
+ze.setUnixMode(0555);
+byte[] data = "some-content".getBytes("UTF-8");
+ze.setSize(data.length);
+tos.putArchiveEntry(ze);
+tos.write(data);
+tos.closeArchiveEntry();
+tos.flush();
+tos.finish();
+  } finally {
+tos.close();
+  }
+
+  // successfully unzip it into an existing dir:
+  FileUtil.unZip(simpleZip, tmp);
+  // check result:
+  assertTrue(new File(tmp, "foo").exists());
+  assertEquals(12, new File(tmp, "foo").length());
+  assertTrue("file lacks execute permissions", new File(tmp, 
"foo").canExecute());
+  assertFalse("file has write permissions", new File(tmp, 
"foo").canWrite());
+  assertTrue("file lacks read permissions", new File(tmp, 
"foo").canRead());
+
+  final File regularFile = new File(tmp, 
"QuickBrownFoxJumpsOverTheLazyDog");
+  regularFile.createNewFile();
+  assertTrue(regularFile.exists());
+  try {

Review comment:
   The code for the latest branch has been merged.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=746995&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-746995
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 07:59
Start Date: 24/Mar/22 07:59
Worklog Time Spent: 10m 
  Work Description: zhongjingxiong commented on a change in pull request 
#4036:
URL: https://github.com/apache/hadoop/pull/4036#discussion_r834015838



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
##
@@ -706,35 +706,40 @@ public void testCreateLocalTempFile() throws IOException {
   public void testUnZip() throws IOException {
 // make sa simple zip
 final File simpleZip = new File(del, FILE);
-OutputStream os = new FileOutputStream(simpleZip); 
-ZipOutputStream tos = new ZipOutputStream(os);
-try {
-  ZipEntry ze = new ZipEntry("foo");
-  byte[] data = "some-content".getBytes("UTF-8");
-  ze.setSize(data.length);
-  tos.putNextEntry(ze);
-  tos.write(data);
-  tos.closeEntry();
-  tos.flush();
-  tos.finish();
-} finally {
-  tos.close();
-}
-
-// successfully unzip it into an existing dir:
-FileUtil.unZip(simpleZip, tmp);
-// check result:
-assertTrue(new File(tmp, "foo").exists());
-assertEquals(12, new File(tmp, "foo").length());
-
-final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-regularFile.createNewFile();
-assertTrue(regularFile.exists());
-try {
-  FileUtil.unZip(simpleZip, regularFile);
-  assertTrue("An IOException expected.", false);
-} catch (IOException ioe) {
-  // okay
+try (OutputStream os = new FileOutputStream(simpleZip);
+ ZipArchiveOutputStream tos = new ZipArchiveOutputStream(os)) {
+  try {
+ZipArchiveEntry ze = new  ZipArchiveEntry("foo");
+ze.setUnixMode(0555);
+byte[] data = "some-content".getBytes("UTF-8");
+ze.setSize(data.length);
+tos.putArchiveEntry(ze);
+tos.write(data);
+tos.closeArchiveEntry();
+tos.flush();
+tos.finish();
+  } finally {
+tos.close();

Review comment:
   Let me modify that.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 746995)
Time Spent: 6h 40m  (was: 6.5h)

> Fileutil's unzip method causes unzipped files to lose their original 
> permissions
> 
>
> Key: HADOOP-18145
> URL: https://issues.apache.org/jira/browse/HADOOP-18145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: jingxiong zhong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> When Spark decompresses a zip file with the unzip method of FileUtil, a 
> file that originally had the executable permission loses it after 
> extraction; we should preserve the original permissions.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhongjingxiong commented on a change in pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


zhongjingxiong commented on a change in pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#discussion_r834015838



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
##
@@ -706,35 +706,40 @@ public void testCreateLocalTempFile() throws IOException {
   public void testUnZip() throws IOException {
 // make sa simple zip
 final File simpleZip = new File(del, FILE);
-OutputStream os = new FileOutputStream(simpleZip); 
-ZipOutputStream tos = new ZipOutputStream(os);
-try {
-  ZipEntry ze = new ZipEntry("foo");
-  byte[] data = "some-content".getBytes("UTF-8");
-  ze.setSize(data.length);
-  tos.putNextEntry(ze);
-  tos.write(data);
-  tos.closeEntry();
-  tos.flush();
-  tos.finish();
-} finally {
-  tos.close();
-}
-
-// successfully unzip it into an existing dir:
-FileUtil.unZip(simpleZip, tmp);
-// check result:
-assertTrue(new File(tmp, "foo").exists());
-assertEquals(12, new File(tmp, "foo").length());
-
-final File regularFile = new File(tmp, "QuickBrownFoxJumpsOverTheLazyDog");
-regularFile.createNewFile();
-assertTrue(regularFile.exists());
-try {
-  FileUtil.unZip(simpleZip, regularFile);
-  assertTrue("An IOException expected.", false);
-} catch (IOException ioe) {
-  // okay
+try (OutputStream os = new FileOutputStream(simpleZip);
+ ZipArchiveOutputStream tos = new ZipArchiveOutputStream(os)) {
+  try {
+ZipArchiveEntry ze = new  ZipArchiveEntry("foo");
+ze.setUnixMode(0555);
+byte[] data = "some-content".getBytes("UTF-8");
+ze.setSize(data.length);
+tos.putArchiveEntry(ze);
+tos.write(data);
+tos.closeArchiveEntry();
+tos.flush();
+tos.finish();
+  } finally {
+tos.close();

Review comment:
   Let me modify that.
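
   For context, the extraction side this PR targets can be sketched as
follows. This is an illustrative assumption about the approach, not the
actual patch: extract each entry with commons-compress and re-apply the unix
mode stored in the archive. ZipArchiveEntry#getUnixMode and
FileUtil.setPermission are real APIs; the helper name and structure are
hypothetical:

   import java.io.File;
   import java.io.IOException;
   import java.io.InputStream;
   import java.nio.file.Files;
   import java.util.Enumeration;
   import org.apache.commons.compress.archivers.zip.ZipArchiveEntry;
   import org.apache.commons.compress.archivers.zip.ZipFile;
   import org.apache.hadoop.fs.FileUtil;
   import org.apache.hadoop.fs.permission.FsPermission;

   class UnZipSketch {
     // Hypothetical helper: unzip while keeping the rwx bits of each entry.
     static void unZipPreservingPermissions(File zip, File destDir) throws IOException {
       try (ZipFile zipFile = new ZipFile(zip)) {
         Enumeration<ZipArchiveEntry> entries = zipFile.getEntries();
         while (entries.hasMoreElements()) {
           ZipArchiveEntry entry = entries.nextElement();
           File out = new File(destDir, entry.getName());
           if (entry.isDirectory()) {
             out.mkdirs();
             continue;
           }
           out.getParentFile().mkdirs();
           try (InputStream in = zipFile.getInputStream(entry)) {
             Files.copy(in, out.toPath());
           }
           int mode = entry.getUnixMode() & 0777; // keep rwx only, drop suid/sgid/sticky
           if (mode != 0) { // entries written without a unix mode report 0
             FileUtil.setPermission(out, new FsPermission((short) mode));
           }
         }
       }
     }
   }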




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] liubingxing commented on pull request #4032: HDFS-16484. [SPS]: Fix an infinite loop bug in SPSPathIdProcessor thread

2022-03-24 Thread GitBox


liubingxing commented on pull request #4032:
URL: https://github.com/apache/hadoop/pull/4032#issuecomment-1077322620


   Hi @tasanuma, could you please take a look at this? Thank you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=746981&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-746981
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 07:23
Start Date: 24/Mar/22 07:23
Worklog Time Spent: 10m 
  Work Description: zhongjingxiong edited a comment on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313810






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 746981)
Time Spent: 6h 20m  (was: 6h 10m)

> Fileutil's unzip method causes unzipped files to lose their original 
> permissions
> 
>
> Key: HADOOP-18145
> URL: https://issues.apache.org/jira/browse/HADOOP-18145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: jingxiong zhong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> When Spark decompresses a zip file whose entries carry the executable 
> permission, the unzip method of FileUtil drops that permission on the 
> extracted files; we should preserve the original permissions.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=746982&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-746982
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 07:23
Start Date: 24/Mar/22 07:23
Worklog Time Spent: 10m 
  Work Description: zhongjingxiong removed a comment on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313260


   > 
   
   Yes, if the suid flag is set, the file executes with the permissions of the 
file's owner, which I think is dangerous. So in this code any file carrying a 
suid flag ends up with only the executable bit (+x); the change only concerns 
the rwx permissions.
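
   (A small illustration of the masking described above, assuming the patch
works on the entry's unix mode: ANDing with 0777 keeps the rwx bits and drops
the setuid/setgid/sticky bits.)

   int modeWithSuid = 04755;           // setuid + rwxr-xr-x, as stored in a zip entry
   int safeMode = modeWithSuid & 0777; // -> 0755: rwx kept, suid dropped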


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 746982)
Time Spent: 6.5h  (was: 6h 20m)

> Fileutil's unzip method causes unzipped files to lose their original 
> permissions
> 
>
> Key: HADOOP-18145
> URL: https://issues.apache.org/jira/browse/HADOOP-18145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: jingxiong zhong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> When Spark decompresses a zip file whose entries carry the executable 
> permission, the unzip method of FileUtil drops that permission on the 
> extracted files; we should preserve the original permissions.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhongjingxiong removed a comment on pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


zhongjingxiong removed a comment on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313260


   > 
   
   Yes, if the suid flag is set, the file executes with the permissions of the 
file's owner, which I think is dangerous. So in this code any file carrying a 
suid flag ends up with only the executable bit (+x); the change only concerns 
the rwx permissions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] zhongjingxiong edited a comment on pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


zhongjingxiong edited a comment on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313810






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=746980&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-746980
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 07:22
Start Date: 24/Mar/22 07:22
Worklog Time Spent: 10m 
  Work Description: zhongjingxiong commented on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313810


   > Yes, if the suid flag is set, the file executes with the permissions of 
the file's owner, which I think is dangerous. So in this code any file 
carrying a suid flag ends up with only the executable bit (+x); the change 
only concerns the rwx permissions.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 746980)
Time Spent: 6h 10m  (was: 6h)

> Fileutil's unzip method causes unzipped files to lose their original 
> permissions
> 
>
> Key: HADOOP-18145
> URL: https://issues.apache.org/jira/browse/HADOOP-18145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: jingxiong zhong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> When Spark decompresses a zip file whose entries carry the executable 
> permission, the unzip method of FileUtil drops that permission on the 
> extracted files; we should preserve the original permissions.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhongjingxiong commented on pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


zhongjingxiong commented on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313810


   > Yes, if the suid flag is set, the file executes with the permissions of 
the file's owner, which I think is dangerous. So in this code any file 
carrying a suid flag ends up with only the executable bit (+x); the change 
only concerns the rwx permissions.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Work logged] (HADOOP-18145) Fileutil's unzip method causes unzipped files to lose their original permissions

2022-03-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18145?focusedWorklogId=746979&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-746979
 ]

ASF GitHub Bot logged work on HADOOP-18145:
---

Author: ASF GitHub Bot
Created on: 24/Mar/22 07:21
Start Date: 24/Mar/22 07:21
Worklog Time Spent: 10m 
  Work Description: zhongjingxiong commented on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313260


   > 
   
   Yes, if the suid flag is set, the file executes with the permissions of the 
file's owner, which I think is dangerous. So in this code any file carrying a 
suid flag ends up with only the executable bit (+x); the change only concerns 
the rwx permissions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 746979)
Time Spent: 6h  (was: 5h 50m)

> Fileutil's unzip method causes unzipped files to lose their original 
> permissions
> 
>
> Key: HADOOP-18145
> URL: https://issues.apache.org/jira/browse/HADOOP-18145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.2
>Reporter: jingxiong zhong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> When Spark decompresses a zip file whose entries carry the executable 
> permission, the unzip method of FileUtil drops that permission on the 
> extracted files; we should preserve the original permissions.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhongjingxiong commented on pull request #4036: HADOOP-18145.Decompress the ZIP file and retain the original file per…

2022-03-24 Thread GitBox


zhongjingxiong commented on pull request #4036:
URL: https://github.com/apache/hadoop/pull/4036#issuecomment-1077313260


   > 
   
   Yes, if the suid flag is set, the file executes with the permissions of the 
file's owner, which I think is dangerous. So in this code any file carrying a 
suid flag ends up with only the executable bit (+x); the change only concerns 
the rwx permissions.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] cndaimin opened a new pull request #4104: HDFS-16520. Improve EC pread: avoid potential reading whole block

2022-03-24 Thread GitBox


cndaimin opened a new pull request #4104:
URL: https://github.com/apache/hadoop/pull/4104


   An HDFS client 'pread' is a positional read: it needs only a range of data 
rather than the whole file/block. Through BlockReaderFactory#setLength, the 
client tells the datanode how much of the block to read from disk and send 
back to the client.
   For EC files this read length is not set properly: both pread and sread 
default to 'block.getBlockSize() - offsetInBlock'. The datanode therefore 
reads and sends far more data than needed and aborts when the client closes 
the connection, which wastes a lot of resources.
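
   A sketch of the idea, with illustrative names rather than the actual HDFS
client code: bound the datanode-side read to what the positional read asked
for instead of defaulting to the remainder of the block:

   // Length the client should pass to BlockReaderFactory#setLength for a pread.
   static long preadLength(long blockSize, long offsetInBlock, long bytesWanted) {
     long remainingInBlock = blockSize - offsetInBlock; // previous default: to block end
     return Math.min(bytesWanted, remainingInBlock);    // only what the pread needs
   }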


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [hadoop] aajisaka opened a new pull request #4103: YARN-10720. YARN WebAppProxyServlet should support connection timeout…

2022-03-24 Thread GitBox


aajisaka opened a new pull request #4103:
URL: https://github.com/apache/hadoop/pull/4103


   … to prevent proxy server from hanging. Contributed by Qi Zhu.
   
   (cherry picked from commit a0deda1a777d8967fb8c08ac976543cda895773d)
   
Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/WebAppProxyServlet.java

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
   
   (cherry picked from commit dbeb41b46a2cef9a48983add9c3b191915bd6de5)
   
Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/TestWebAppProxyServlet.java
   
   
   
   ### Description of PR
   
   Backport YARN-10720 to branch-2.10. This PR is to run the precommit job for 
this backport. I also opened #4102 to backport to branch-3.2.
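
   The underlying change is the usual timeout pattern: bound both the connect
and the read phase of the proxy's upstream request so an unresponsive AM web
endpoint cannot hang a proxy thread indefinitely. A generic sketch, not the
servlet's actual code (which may use a different HTTP client):

   import java.io.IOException;
   import java.net.HttpURLConnection;
   import java.net.URL;

   static HttpURLConnection openWithTimeouts(URL url, int timeoutMs) throws IOException {
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     conn.setConnectTimeout(timeoutMs); // fail fast if the endpoint never accepts
     conn.setReadTimeout(timeoutMs);    // fail fast if it accepts but never responds
     return conn;
   }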
   
   ### How was this patch tested?
   
   Ran TestWebAppProxyServlet locally.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - n/a Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - n/a If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - n/a If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


