[GitHub] [hadoop] hadoop-yetus commented on pull request #2325: YARN-10443: Document options of logs CLI

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2325:
URL: https://github.com/apache/hadoop/pull/2325#issuecomment-696535822


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 43s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  47m 52s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 11s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 32s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  67m 14s |   |
   
   
   | Subsystem | Report/Notes |
   |--------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2325/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2325 |
   | JIRA Issue | YARN-10443 |
   | Optional Tests | dupname asflicense mvnsite markdownlint |
   | uname | Linux 7291926c8f02 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b5d9e2334b |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2325/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant opened a new pull request #2326: HDFS-15590. namenode fails to start when ordered snapshot deletion feature is disabled

2020-09-21 Thread GitBox


bshashikant opened a new pull request #2326:
URL: https://github.com/apache/hadoop/pull/2326


   please see https://issues.apache.org/jira/browse/HDFS-15590
   






[GitHub] [hadoop] wojiaodoubao commented on pull request #1993: HADOOP-17021. Add concat fs command

2020-09-21 Thread GitBox


wojiaodoubao commented on pull request #1993:
URL: https://github.com/apache/hadoop/pull/1993#issuecomment-696015914


   Fix checkstyle and whitespace. Pending jenkins.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696394844










[GitHub] [hadoop] ankit-kumar-25 opened a new pull request #2325: YARN-10443: Document options of logs CLI

2020-09-21 Thread GitBox


ankit-kumar-25 opened a new pull request #2325:
URL: https://github.com/apache/hadoop/pull/2325


   What? :: Document options of logs CLI
   
   https://issues.apache.org/jira/browse/YARN-10443
   
   @adamantal Can you please review this?
   
   Thanks!






[jira] [Work logged] (HADOOP-11452) Make FileSystem.rename(path, path, options) public, specified, tested

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11452?focusedWorklogId=487733&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487733
 ]

ASF GitHub Bot logged work on HADOOP-11452:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:31
Start Date: 22/Sep/20 03:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #743:
URL: https://github.com/apache/hadoop/pull/743#issuecomment-689849388


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  compile  |   0m 35s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 22s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 34s |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-hdfs-client in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 35s |  hadoop-aws in trunk failed.  |
   | -1 :x: |  shadedclient  |  12m 52s |  branch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  12m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 23s |  the patch passed  |
   | -1 :x: |  compile  |  15m 11s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  15m 11s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 27s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 27s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   3m  8s |  root: The patch generated 587 
new + 0 unchanged - 0 fixed = 587 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   4m 43s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 8 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 20s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 3 new 
+ 1 unchanged - 0 fixed = 4 total (was 1)  |
   | -1 :x: |  findbugs  |   2m 15s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 24s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  3s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  96m 17s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-openstack in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 267m 36s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Should org.apache.hadoop.fs.impl.RenameHelper$RenameValidationResult 
be a _static_ inner class?  At RenameHelper.java:inner

[GitHub] [hadoop] omalley commented on pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


omalley commented on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-696205838


   Yeah, I just added a suppression file for findbugs that hopefully will make 
Yetus happy. *Sigh* findbugs and generated code are not a good combination.






[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487739&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487739
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:31
Start Date: 22/Sep/20 03:31
Worklog Time Spent: 10m 
  Work Description: omalley commented on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-696205838


   Yeah, I just added a suppression file for findbugs that hopefully will make 
Yetus happy. *Sigh* findbugs and generated code are not a good combination.





Issue Time Tracking
---

Worklog Id: (was: 487739)
Time Spent: 3h  (was: 2h 50m)

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> -
>
> Key: HADOOP-11867
> URL: https://issues.apache.org/jira/browse/HADOOP-11867
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3, hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The most significant way to read from a filesystem efficiently is to let the 
> FileSystem implementation handle the seek behaviour underneath the API, so that 
> it can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read 
> locations as part of a single call, while letting the system schedule/plan 
> the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows 
> for potentially optimizing away the seek-gaps within the FSDataInputStream 
> implementation.
> For seek+read systems with even more latency than locally-attached disks, 
> something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would 
> take care of the seeks internally while reading chunk.remaining() bytes into each 
> chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub in this as a sequence of seeks + read() into 
> ByteBuffers, without forcing each FS implementation to override this in any 
> way.
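The last paragraph above is essentially a loop of positioned reads. A minimal sketch of that fallback, using only the existing PositionedReadable#readFully(long, byte[], int, int) call; class and method names below are illustrative, not part of the actual HADOOP-11867 patch:

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.PositionedReadable;

// Illustrative fallback only: the "sequence of seeks + read() into ByteBuffers"
// the JIRA text describes, not the real HADOOP-11867 implementation.
public final class NaiveVectoredRead {
  private NaiveVectoredRead() {
  }

  public static void readFully(PositionedReadable in,
                               long[] offsets,
                               ByteBuffer[] chunks) throws IOException {
    for (int i = 0; i < offsets.length; i++) {
      ByteBuffer chunk = chunks[i];
      // Read chunk.remaining() bytes at offsets[i] into a temporary array,
      // then copy into the buffer so both heap and direct buffers work.
      byte[] tmp = new byte[chunk.remaining()];
      in.readFully(offsets[i], tmp, 0, tmp.length);
      chunk.put(tmp);
      chunk.flip();
    }
  }
}
```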



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Work logged] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?focusedWorklogId=487714&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487714
 ]

ASF GitHub Bot logged work on HADOOP-17277:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:29
Start Date: 22/Sep/20 03:29
Worklog Time Spent: 10m 
  Work Description: ferhui opened a new pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   





Issue Time Tracking
---

Worklog Id: (was: 487714)
Time Spent: 1h 10m  (was: 1h)

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Many spelling errors for separator, correct them!






[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


mukund-thakur commented on a change in pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#discussion_r491856448



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AsyncReaderUtils.java
##
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.IntFunction;
+
+import org.apache.hadoop.fs.ByteBufferPositionedReadable;
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.PositionedReadable;
+
+public class AsyncReaderUtils {
+  /**
+   * Read fully a list of file ranges asynchronously from this file.
+   * The default iterates through the ranges to read each synchronously, but
+   * the intent is that subclasses can make more efficient readers.
+   * The data or exceptions are pushed into {@link FileRange#getData()}.
+   * @param stream the stream to read the data from
+   * @param ranges the byte ranges to read
+   * @param allocate the byte buffer allocation
+   * @param minimumSeek the minimum number of bytes to seek over
+   * @param maximumRead the largest number of bytes to combine into a single 
read
+   */
+  public static void readAsync(PositionedReadable stream,
+   List<? extends FileRange> ranges,
+   IntFunction<ByteBuffer> allocate,
+   int minimumSeek,
+   int maximumRead) {
+if (isOrderedDisjoint(ranges, 1, minimumSeek)) {
+  for(FileRange range: ranges) {
+range.setData(readRangeFrom(stream, range, allocate));
+  }
+} else {
+  for(CombinedFileRange range: sortAndMergeRanges(ranges, 1, minimumSeek,
+  maximumRead)) {
+CompletableFuture<ByteBuffer> read =
+readRangeFrom(stream, range, allocate);
+for(FileRange child: range.getUnderlying()) {
+  child.setData(read.thenApply(
+  (b) -> sliceTo(b, range.getOffset(), child)));
+}
+  }
+}
+  }
+
+  /**
+   * Synchronously reads a range from the stream dealing with the combinations
+   * of ByteBuffers buffers and PositionedReadable streams.
+   * @param stream the stream to read from
+   * @param range the range to read
+   * @param allocate the function to allocate ByteBuffers
+   * @return the CompletableFuture that contains the read data
+   */
+  public static CompletableFuture<ByteBuffer> readRangeFrom(PositionedReadable stream,
+      FileRange range,
+      IntFunction<ByteBuffer> allocate) {
+CompletableFuture<ByteBuffer> result = new CompletableFuture<>();
+try {
+  ByteBuffer buffer = allocate.apply(range.getLength());
+  if (stream instanceof ByteBufferPositionedReadable) {
+((ByteBufferPositionedReadable) stream).readFully(range.getOffset(),
+buffer);
+buffer.flip();
+  } else {
+if (buffer.isDirect()) {
+  // if we need to read data from a direct buffer and the stream 
doesn't
+  // support it, we allocate a byte array to use.
+  byte[] tmp = new byte[range.getLength()];
+  stream.readFully(range.getOffset(), tmp, 0, tmp.length);
+  buffer.put(tmp);
+  buffer.flip();
+} else {
+  stream.readFully(range.getOffset(), buffer.array(),
+  buffer.arrayOffset(), range.getLength());
+}
+  }
+  result.complete(buffer);
+} catch (IOException ioe) {
+  result.completeExceptionally(ioe);
+}
+return result;
+  }
+
+  /**
+   * Is the given input list:
+   * 
+   *   already sorted by offset
+   *   each range is more than minimumSeek apart
+   *   the start and end of each range is a multiple of chunkSize
+   * 
+   *
+   * @param input the list of input ranges
+   * @param chunkSize the size of the chunks that the offset & end must align 
to
+   * @param minimumSeek the minimum distance between ranges
+   * @ret
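For readers following the merged-range branch of readAsync above: after one combined read, each child range's data has to be carved back out of the shared buffer. The sliceTo helper it calls is not shown in the quoted diff, so the following is only a plain java.nio sketch of that idea, with hypothetical names:

```java
import java.nio.ByteBuffer;

final class RangeSlicing {
  private RangeSlicing() {
  }

  /**
   * Return a zero-copy view of the bytes belonging to a child range that
   * starts at childOffset within a combined read starting at combinedOffset.
   */
  static ByteBuffer sliceChild(ByteBuffer combined,
                               long combinedOffset,
                               long childOffset,
                               int childLength) {
    ByteBuffer view = combined.duplicate();      // independent position/limit
    int start = (int) (childOffset - combinedOffset);
    view.position(start);
    view.limit(start + childLength);
    return view.slice();                         // the child's bytes only
  }
}
```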

[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487612&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487612
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:21
Start Date: 22/Sep/20 03:21
Worklog Time Spent: 10m 
  Work Description: omalley commented on a change in pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#discussion_r492174484



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AsyncReaderUtils.java
##
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.IntFunction;
+
+import org.apache.hadoop.fs.ByteBufferPositionedReadable;
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.PositionedReadable;
+
+public class AsyncReaderUtils {
+  /**
+   * Read fully a list of file ranges asynchronously from this file.
+   * The default iterates through the ranges to read each synchronously, but
+   * the intent is that subclasses can make more efficient readers.
+   * The data or exceptions are pushed into {@link FileRange#getData()}.
+   * @param stream the stream to read the data from
+   * @param ranges the byte ranges to read
+   * @param allocate the byte buffer allocation
+   * @param minimumSeek the minimum number of bytes to seek over
+   * @param maximumRead the largest number of bytes to combine into a single 
read
+   */
+  public static void readAsync(PositionedReadable stream,
+   List<? extends FileRange> ranges,
+   IntFunction<ByteBuffer> allocate,
+   int minimumSeek,
+   int maximumRead) {
+if (isOrderedDisjoint(ranges, 1, minimumSeek)) {
+  for(FileRange range: ranges) {
+range.setData(readRangeFrom(stream, range, allocate));
+  }
+} else {
+  for(CombinedFileRange range: sortAndMergeRanges(ranges, 1, minimumSeek,
+  maximumRead)) {
+CompletableFuture<ByteBuffer> read =
+readRangeFrom(stream, range, allocate);
+for(FileRange child: range.getUnderlying()) {
+  child.setData(read.thenApply(
+  (b) -> sliceTo(b, range.getOffset(), child)));
+}
+  }
+}
+  }
+
+  /**
+   * Synchronously reads a range from the stream dealing with the combinations
+   * of ByteBuffers buffers and PositionedReadable streams.
+   * @param stream the stream to read from
+   * @param range the range to read
+   * @param allocate the function to allocate ByteBuffers
+   * @return the CompletableFuture that contains the read data
+   */
+  public static CompletableFuture<ByteBuffer> readRangeFrom(PositionedReadable stream,
+      FileRange range,
+      IntFunction<ByteBuffer> allocate) {
+CompletableFuture<ByteBuffer> result = new CompletableFuture<>();
+try {
+  ByteBuffer buffer = allocate.apply(range.getLength());
+  if (stream instanceof ByteBufferPositionedReadable) {
+((ByteBufferPositionedReadable) stream).readFully(range.getOffset(),
+buffer);
+buffer.flip();
+  } else {
+if (buffer.isDirect()) {
+  // if we need to read data from a direct buffer and the stream 
doesn't
+  // support it, we allocate a byte array to use.
+  byte[] tmp = new byte[range.getLength()];
+  stream.readFully(range.getOffset(), tmp, 0, tmp.length);
+  buffer.put(tmp);
+  buffer.flip();
+} else {
+  stream.readFully(range.getOffset(), buffer.array(),
+  buffer.arrayOffset(), range.getLength());
+}
+  }
+  result.complete(buffer);
+} catch (IOException ioe) {
+  result.completeExceptionally(ioe);
+}
+r

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #743: HADOOP-11452 make rename/3 public

2020-09-21 Thread GitBox


hadoop-yetus removed a comment on pull request #743:
URL: https://github.com/apache/hadoop/pull/743#issuecomment-689849388


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 26s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 30s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  compile  |   0m 35s |  root in trunk failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 22s |  The patch fails to run 
checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 34s |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-hdfs-client in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 35s |  hadoop-aws in trunk failed.  |
   | -1 :x: |  shadedclient  |  12m 52s |  branch has errors when building and 
testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 41s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  12m  7s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 23s |  the patch passed  |
   | -1 :x: |  compile  |  15m 11s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  15m 11s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 27s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 27s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   3m  8s |  root: The patch generated 587 
new + 0 unchanged - 0 fixed = 587 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   4m 43s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 8 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 20s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 3 new 
+ 1 unchanged - 0 fixed = 4 total (was 1)  |
   | -1 :x: |  findbugs  |   2m 15s |  hadoop-common-project/hadoop-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 24s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  3s |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  |  96m 17s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 33s |  hadoop-openstack in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 267m 36s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Should org.apache.hadoop.fs.impl.RenameHelper$RenameValidationResult 
be a _static_ inner class?  At RenameHelper.java:inner class?  At 
RenameHelper.java:[line 320] |
   | Failed junit tests | 
hadoop.fs.contract.rawlocal.TestRawlocalContractRenameEx |
   |   | hadoop.fs.viewfs.TestFcMainOperationsLocalFs |
   |   | hadoop.fs.TestSymlinkLocalFSFileSystem |
   |   | hadoop.fs.TestTrash |
   |   | hadoop.fs.TestFSMainOperationsLocalFileSystem |
   |   | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs |
   |   | hadoop.fs.viewfs.TestViewFsLocalFs |
   |

[GitHub] [hadoop] ayushtkn merged pull request #2266: HDFS-15554. RBF: force router check file existence in destinations before adding/updating mount points

2020-09-21 Thread GitBox


ayushtkn merged pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266


   






[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487750&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487750
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:32
Start Date: 22/Sep/20 03:32
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-696027447


   hey @omalley - thanks for the update. Could you do _anything_ with the 
fields in AsyncBenchmark, as they are flooding yetus?
   
   ```
   Unused field:AsyncBenchmark_BufferChoice_jmhType_B3.java
   ```
   
   
   





Issue Time Tracking
---

Worklog Id: (was: 487750)
Time Spent: 3h 10m  (was: 3h)

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> -
>
> Key: HADOOP-11867
> URL: https://issues.apache.org/jira/browse/HADOOP-11867
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3, hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The most significant way to read from a filesystem efficiently is to let the 
> FileSystem implementation handle the seek behaviour underneath the API, so that 
> it can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read 
> locations as part of a single call, while letting the system schedule/plan 
> the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows 
> for potentially optimizing away the seek-gaps within the FSDataInputStream 
> implementation.
> For seek+read systems with even more latency than locally-attached disks, 
> something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would 
> take care of the seeks internally while reading chunk.remaining() bytes into each 
> chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub in this as a sequence of seeks + read() into 
> ByteBuffers, without forcing each FS implementation to override this in any 
> way.






[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=487589&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487589
 ]

ASF GitHub Bot logged work on HADOOP-13327:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:18
Start Date: 22/Sep/20 03:18
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2102:
URL: https://github.com/apache/hadoop/pull/2102#discussion_r492198720



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
##
@@ -629,10 +633,18 @@ The result is `FSDataOutputStream`, which through its 
operations may generate ne
  clients creating files with `overwrite==true` to fail if the file is created
  by another client between the two tests.
 
-* S3A, Swift and potentially other Object Stores do not currently change the 
FS state
+* S3A, Swift and potentially other Object Stores do not currently change the 
`FS` state
 until the output stream `close()` operation is completed.
-This MAY be a bug, as it allows >1 client to create a file with 
`overwrite==false`,
- and potentially confuse file/directory logic
+This is a significant difference between the behavior of object stores
+and that of filesystems, as it allows >1 client to create a file with 
`overwrite==false`,

Review comment:
   changed to ==; also replace >1 with > to keep everything happy
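The behaviour that hunk documents is easiest to see as code: with overwrite==false, a real filesystem rejects the second create() immediately, while an S3A-style store may accept both because nothing becomes visible until close(). A sketch under that assumption; the bucket and path are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OverwriteFalseRace {
  public static void main(String[] args) throws Exception {
    Path p = new Path("s3a://example-bucket/dir/file");   // placeholder URI
    FileSystem fs = p.getFileSystem(new Configuration());

    // Writer A: create with overwrite == false.
    FSDataOutputStream a = fs.create(p, false);

    // Writer B (a second client in practice): HDFS would fail this call with
    // FileAlreadyExistsException; an object store may not, because the object
    // only appears when a stream is closed.
    FSDataOutputStream b = fs.create(p, false);

    a.write(1);
    a.close();
    b.write(2);
    b.close();   // on the object store, the last close wins
  }
}
```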







Issue Time Tracking
---

Worklog Id: (was: 487589)
Time Spent: 1.5h  (was: 1h 20m)

> Add OutputStream + Syncable to the Filesystem Specification
> ---
>
> Key: HADOOP-13327
> URL: https://issues.apache.org/jira/browse/HADOOP-13327
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, 
> HADOOP-13327-branch-2-001.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Write down what a Filesystem output stream should do. While core the API is 
> defined in Java, that doesn't say what's expected about visibility, 
> durability, etc —and Hadoop Syncable interface is entirely ours to define.






[GitHub] [hadoop] NickyYe commented on a change in pull request #2274: HDFS-15557. Log the reason why a storage log file can't be deleted

2020-09-21 Thread GitBox


NickyYe commented on a change in pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#discussion_r491797152



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/AtomicFileOutputStream.java
##
@@ -75,8 +76,13 @@ public void close() throws IOException {
 boolean renamed = tmpFile.renameTo(origFile);
 if (!renamed) {
   // On windows, renameTo does not replace.
-  if (origFile.exists() && !origFile.delete()) {
-throw new IOException("Could not delete original file " + 
origFile);
+  if (origFile.exists()) {
+try {
+  Files.delete(origFile.toPath());
+} catch (IOException e) {
+  throw new IOException("Could not delete original file " + 
origFile

Review comment:
   Fixed. Thanks.
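For context on the hunk above: java.io.File#delete only reports success or failure as a boolean, whereas java.nio.file.Files#delete throws an IOException (NoSuchFileException, DirectoryNotEmptyException, AccessDeniedException, ...) that says why the deletion failed, which is the reason HDFS-15557 wants in the error message. A small standalone illustration, with a placeholder path:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class DeleteDiagnostics {
  public static void main(String[] args) {
    File orig = new File("/tmp/example-storage-file");   // placeholder path

    // Old style: on failure we only learn "false", not the cause.
    if (orig.exists() && !orig.delete()) {
      System.err.println("Could not delete original file " + orig);
    }

    // New style: the exception carries the underlying cause.
    try {
      Files.deleteIfExists(orig.toPath());
    } catch (IOException e) {
      System.err.println("Could not delete original file " + orig + ": " + e);
    }
  }
}
```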








[GitHub] [hadoop] xiaoyuyao commented on pull request #2301: HADOOP-17259. Allow SSLFactory fallback to input config if ssl-*.xml …

2020-09-21 Thread GitBox


xiaoyuyao commented on pull request #2301:
URL: https://github.com/apache/hadoop/pull/2301#issuecomment-696330056


   Thanks @steveloughran for the review. I'll merge it shortly.  






[GitHub] [hadoop] hadoop-yetus commented on pull request #2319: MAPREDUCE-7294. Only application master should upload resource to Yarn Shared Cache.

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2319:
URL: https://github.com/apache/hadoop/pull/2319#issuecomment-695964909










[jira] [Work logged] (HADOOP-17255) JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17255?focusedWorklogId=487703&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487703
 ]

ASF GitHub Bot logged work on HADOOP-17255:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:28
Start Date: 22/Sep/20 03:28
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2291:
URL: https://github.com/apache/hadoop/pull/2291#issuecomment-696306527


   +1 for the better errors. You know in #743 I'm going to make rename/3 public 
in FileSystem too, so we can move the code in distcp etc to using it?
   





Issue Time Tracking
---

Worklog Id: (was: 487703)
Time Spent: 40m  (was: 0.5h)

> JavaKeyStoreProvider fails to create a new key if the keystore is HDFS
> --
>
> Key: HADOOP-17255
> URL: https://issues.apache.org/jira/browse/HADOOP-17255
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The caller of JavaKeyStoreProvider#renameOrFail assumes that it throws 
> FileNotFoundException if the src does not exist. However, 
> JavaKeyStoreProvider#renameOrFail calls the old rename API. In 
> DistributedFileSystem, the old API returns false if the src does not exist.
> That way JavaKeyStoreProvider fails to create a new key if the keystore is 
> HDFS.
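Put differently, any caller built on the legacy boolean rename() has to translate a false return into the exception it actually wants. A rough sketch of that pattern, using only FileSystem#rename and FileSystem#exists; the helper name is illustrative and this is not the actual HADOOP-17255 patch:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class RenameOrFailSketch {
  private RenameOrFailSketch() {
  }

  static void renameOrFail(FileSystem fs, Path src, Path dst) throws IOException {
    // DistributedFileSystem's legacy rename() returns false when src is
    // missing instead of throwing, so the caller has to check and raise
    // a meaningful exception itself.
    if (!fs.rename(src, dst)) {
      if (!fs.exists(src)) {
        throw new FileNotFoundException("Rename source " + src + " does not exist");
      }
      throw new IOException("Rename of " + src + " to " + dst + " failed");
    }
  }
}
```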






[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487752&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487752
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:32
Start Date: 22/Sep/20 03:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-695162046









Issue Time Tracking
---

Worklog Id: (was: 487752)
Time Spent: 3h 20m  (was: 3h 10m)

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> -
>
> Key: HADOOP-11867
> URL: https://issues.apache.org/jira/browse/HADOOP-11867
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3, hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The most significant way to read from a filesystem efficiently is to let the 
> FileSystem implementation handle the seek behaviour underneath the API, so that 
> it can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read 
> locations as part of a single call, while letting the system schedule/plan 
> the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows 
> for potentially optimizing away the seek-gaps within the FSDataInputStream 
> implementation.
> For seek+read systems with even more latency than locally-attached disks, 
> something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would 
> take care of the seeks internally while reading chunk.remaining() bytes into each 
> chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub in this as a sequence of seeks + read() into 
> ByteBuffers, without forcing each FS implementation to override this in any 
> way.






[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487904&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487904
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:44
Start Date: 22/Sep/20 03:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-696332135


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 18s |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m  1s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 51s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 24s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 48s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  20m 33s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 59s |  hadoop-common-project: The patch 
generated 32 new + 90 unchanged - 4 fixed = 122 total (was 94)  |
   | +1 :green_heart: |  mvnsite  |   4m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  6s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 37s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 12s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  17m 29s |  hadoop-common-project in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 28s |  benchmark in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 47s |  The patch generated 18 ASF License 
warnings.  |
   |  |   | 237m  0s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |
   
   
   | Subsystem | Report/Notes |
   |--------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1830/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1830 |
   | JIRA Issue | HADOOP-11867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 252f9a642739 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7a6265ac425 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Priv

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2257: HADOOP-17023 Tune S3AFileSystem.listStatus() api.

2020-09-21 Thread GitBox


hadoop-yetus removed a comment on pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257#issuecomment-694312667


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 21s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 62 unchanged - 0 fixed = 66 total (was 62)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 25s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  71m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 177d1c552191 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20a0e6278d6 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/testReport/ |
   | Max. process+thread count | 456 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.or

[GitHub] [hadoop] steveloughran merged pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…

2020-09-21 Thread GitBox


steveloughran merged pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097


   






[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=487586&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487586
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:18
Start Date: 22/Sep/20 03:18
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-696286408


   Closing this; best to rebuild as a new PR atop trunk.
   
   For the next PR I plan to split into
   * hadoop-common
   * hadoop-aws
   
   trickier than I'd like, but, it means we can get the common one reviewed and 
in early





Issue Time Tracking
---

Worklog Id: (was: 487586)
Time Spent: 2h 40m  (was: 2.5h)

> Add public IOStatistics API
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Applications like to collect statistics about the specific operations they perform, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of
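
For illustration only, a minimal sketch of the shape such a statistics interface could take; the names below are hypothetical and are not the API this JIRA ultimately added.

```
// Hypothetical sketch of an IOStatistics-style interface; illustrative only,
// not the interface committed under HADOOP-16830.
import java.util.Collections;
import java.util.Map;

public interface StatisticsView {
  /** Counter statistics, e.g. bytes read or GET requests issued. */
  Map<String, Long> counters();

  /** Gauge statistics, e.g. currently active connections; optional. */
  default Map<String, Long> gauges() {
    return Collections.emptyMap();
  }
}
```

A stream or filesystem would expose an accessor returning such a view, and wrapper classes like FSDataInputStream would pass the call through to the wrapped stream.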






[GitHub] [hadoop] hadoop-yetus commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-696415310


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  2s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
39 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  30m 51s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  30m 51s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2058 unchanged - 
1 fixed = 2058 total (was 2059)  |
   | +1 :green_heart: |  compile  |  25m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  25m 33s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1952 unchanged - 
1 fixed = 1952 total (was 1953)  |
   | -0 :warning: |  checkstyle  |   3m 46s |  root: The patch generated 25 new 
+ 267 unchanged - 25 fixed = 292 total (was 292)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 58s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 47s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   3m 46s |  hadoop-common-project/hadoop-common 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 42s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   7m 23s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 48s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 245m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.counters; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 183] |
   |  |  Inconsistent synchronization of

[GitHub] [hadoop] xiaoyuyao merged pull request #2301: HADOOP-17259. Allow SSLFactory fallback to input config if ssl-*.xml …

2020-09-21 Thread GitBox


xiaoyuyao merged pull request #2301:
URL: https://github.com/apache/hadoop/pull/2301


   






[jira] [Work logged] (HADOOP-17088) Failed to load XInclude files with relative path.

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17088?focusedWorklogId=487726&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487726
 ]

ASF GitHub Bot logged work on HADOOP-17088:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:30
Start Date: 22/Sep/20 03:30
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097


   





Issue Time Tracking
---

Worklog Id: (was: 487726)
Time Spent: 1.5h  (was: 1h 20m)

> Failed to load XInclude files with relative path.
> -
>
> Key: HADOOP-17088
> URL: https://issues.apache.org/jira/browse/HADOOP-17088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.1
>Reporter: Yushi Hayasaka
>Assignee: Yushi Hayasaka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When we create a configuration file, which load a external XML file with 
> relative path, and try to load it via calling `Configuration.addResource` 
> with `Path(URI)`, we got an error, which failed to load a external XML, after 
> https://issues.apache.org/jira/browse/HADOOP-14216 is merged.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: java.io.IOException: 
> Fetch fail on include for 'mountTable.xml' with no fallback while loading 
> 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
>   at 
> org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896)
>   at com.company.test.Main.main(Main.java:29)
> Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' 
> with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
>   ... 4 more
> {noformat}
> The cause is that the URI is passed as string to java.io.File constructor and 
> File does not support the file URI, so my suggestion is trying to convert 
> from string to URI at first.
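
A small, self-contained illustration of that failure mode and the suggested fix (the path is taken from the stack trace above; the resolution logic inside Configuration itself is more involved).

```
// Demonstrates why passing a file: URI string straight to java.io.File breaks
// relative resolution, while converting the string to a URI first works.
import java.io.File;
import java.net.URI;

public class XIncludeBaseDir {
  public static void main(String[] args) {
    String parent = "file:/opt/hadoop/etc/hadoop/core-site.xml";

    // The scheme stays embedded in the path, so 'mountTable.xml' can never be
    // resolved relative to this directory.
    System.out.println(new File(parent).getParentFile());
    // -> file:/opt/hadoop/etc/hadoop

    // Converting the string to a URI first yields a real filesystem path.
    System.out.println(new File(URI.create(parent)).getParentFile());
    // -> /opt/hadoop/etc/hadoop
  }
}
```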






[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-09-21 Thread GitBox


hadoop-yetus removed a comment on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-650435353


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 31s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 37s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 28s |  hadoop-azure-datalake in trunk failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  22m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 34s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  18m 34s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 57s |  root: The patch generated 2 new 
+ 105 unchanged - 4 fixed = 107 total (was 109)  |
   | +1 :green_heart: |  mvnsite  |   3m 55s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  5s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-azure-datalake in the patch failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   7m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 30s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  92m  4s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 30s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  4s |  hadoop-azure-datalake in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 285m 26s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.contract.rawlocal.TestRawlocalContractCreate |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestDFSInputStream |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestGetFileChecksum |
   
   
   | Subsys

[GitHub] [hadoop] steveloughran commented on a change in pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-09-21 Thread GitBox


steveloughran commented on a change in pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#discussion_r492198720



##
File path: 
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
##
@@ -629,10 +633,18 @@ The result is `FSDataOutputStream`, which through its 
operations may generate ne
  clients creating files with `overwrite==true` to fail if the file is created
  by another client between the two tests.
 
-* S3A, Swift and potentially other Object Stores do not currently change the 
FS state
+* S3A, Swift and potentially other Object Stores do not currently change the 
`FS` state
 until the output stream `close()` operation is completed.
-This MAY be a bug, as it allows >1 client to create a file with 
`overwrite==false`,
- and potentially confuse file/directory logic
+This is a significant difference between the behavior of object stores
+and that of filesystems, as it allows >1 client to create a file with 
`overwrite==false`,

Review comment:
   changed to ==; also replace >1 with > to keep everything happy








[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


hadoop-yetus removed a comment on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-695162046










[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=487671&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487671
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:26
Start Date: 22/Sep/20 03:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-696415310


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  2s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
39 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  30m 51s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  30m 51s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2058 unchanged - 
1 fixed = 2058 total (was 2059)  |
   | +1 :green_heart: |  compile  |  25m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  25m 33s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1952 unchanged - 
1 fixed = 1952 total (was 1953)  |
   | -0 :warning: |  checkstyle  |   3m 46s |  root: The patch generated 25 new 
+ 267 unchanged - 25 fixed = 292 total (was 292)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 58s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 47s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   3m 46s |  hadoop-common-project/hadoop-common 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 42s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   7m 23s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 48s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  The patch does not generate 
ASF License warnings.  |
  

[GitHub] [hadoop] steveloughran commented on pull request #2320: MAPREDUCE-7282: remove v2 commit algorithm for correctness reasons.

2020-09-21 Thread GitBox


steveloughran commented on pull request #2320:
URL: https://github.com/apache/hadoop/pull/2320#issuecomment-696284119


   checkstyle: indentation
   
   ```
   
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:593:
   if (!fs.delete(committedTaskPath, true)) {: 'if' has incorrect 
indentation level 11, expected level should be 10. [Indentation]
   
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:594:
 throw new IOException("Could not delete " + committedTaskPath);: 
'if' child has incorrect indentation level 13, expected level should be 12. 
[Indentation]
   
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:595:
   }: 'if rcurly' has incorrect indentation level 11, expected level 
should be 10. [Indentation]
   
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:697:
  throw new IOException("Could not rename " + previousCommittedTaskPath 
+: Line is longer than 80 characters (found 81). [LineLength]
   
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java:701:
  LOG.warn(attemptId+" had no output to recover.");: 'else' child has 
incorrect indentation level 10, expected level should be 8. [Indentation]
   ```






[GitHub] [hadoop] steveloughran commented on pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


steveloughran commented on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-696027447


   hey @omalley  -thanks for the update. Could you do _anything_ with the 
fields in AsyncBenchmark, as they are flooding yetus
   
   ```
   Unused field:AsyncBenchmark_BufferChoice_jmhType_B3.java
   ```
   
   
   






[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=487805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487805
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:37
Start Date: 22/Sep/20 03:37
Worklog Time Spent: 10m 
  Work Description: steveloughran closed pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069


   





Issue Time Tracking
---

Worklog Id: (was: 487805)
Time Spent: 2h 50m  (was: 2h 40m)

> Add public IOStatistics API
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Applications want to collect statistics on exactly those operations performed 
> during the execution of FS API calls by their individual worker threads, and 
> to return these to their job driver.
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement.
> ThreadLocal isn't enough because the helper threads need to update on the 
> thread local value of the instigator
> My Initial PoC doesn't address that issue, but it shows what I'm thinking of
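
A hedged sketch of the cross-thread aggregation the description calls for: each worker thread snapshots its counters and the job driver merges them. The class and method names here are made up for illustration and are not Hadoop APIs.

```
// Illustration only: merging per-worker counter snapshots in the job driver.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class CounterAggregator {
  private final Map<String, Long> totals = new ConcurrentHashMap<>();

  /** Merge one worker thread's counter snapshot into the job-level totals. */
  public void accept(Map<String, Long> workerSnapshot) {
    workerSnapshot.forEach((name, value) -> totals.merge(name, value, Long::sum));
  }

  /** Aggregated totals across all workers seen so far. */
  public Map<String, Long> totals() {
    return totals;
  }
}
```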






[jira] [Work logged] (HADOOP-17023) Tune listStatus() api of s3a.

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17023?focusedWorklogId=487710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487710
 ]

ASF GitHub Bot logged work on HADOOP-17023:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:29
Start Date: 22/Sep/20 03:29
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257#issuecomment-696211236


   LGTM. +1 
   
   -thank you for a great piece of work here!





Issue Time Tracking
---

Worklog Id: (was: 487710)
Time Spent: 3h 20m  (was: 3h 10m)

> Tune listStatus() api of s3a.
> -
>
> Key: HADOOP-17023
> URL: https://issues.apache.org/jira/browse/HADOOP-17023
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> A similar optimisation to the one done for listLocatedStatus() in 
> https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the 
> listStatus() API as well. 
> This will reduce the number of remote calls made in the case of a directory 
> listing.
>  
> CC [~ste...@apache.org] [~shwethags]
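
For readers unfamiliar with the earlier change, a rough conceptual sketch of the optimisation follows. This is not S3AFileSystem code; the helper names are placeholders standing in for the HEAD and LIST requests an object-store client issues.

```
// Conceptual sketch only: issue the LIST first and fall back to a HEAD-style
// probe only when the listing is empty, instead of always probing first.
import java.util.Collections;
import java.util.List;

class ListingSketch {
  List<String> listStatusOptimised(String path) {
    List<String> listing = listUnderPrefix(path);   // one remote LIST call
    if (!listing.isEmpty()) {
      return listing;                               // common case: one call
    }
    // Empty listing: probe to distinguish a file from an empty directory.
    return probeIsFile(path)
        ? Collections.singletonList(path)
        : Collections.emptyList();
  }

  // Placeholder stubs standing in for the real object-store requests.
  List<String> listUnderPrefix(String path) { return Collections.emptyList(); }
  boolean probeIsFile(String path) { return false; }
}
```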






[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487712&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487712
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:29
Start Date: 22/Sep/20 03:29
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on a change in pull request 
#1830:
URL: https://github.com/apache/hadoop/pull/1830#discussion_r491856448



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AsyncReaderUtils.java
##
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.IntFunction;
+
+import org.apache.hadoop.fs.ByteBufferPositionedReadable;
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.PositionedReadable;
+
+public class AsyncReaderUtils {
+  /**
+   * Read fully a list of file ranges asynchronously from this file.
+   * The default iterates through the ranges to read each synchronously, but
+   * the intent is that subclasses can make more efficient readers.
+   * The data or exceptions are pushed into {@link FileRange#getData()}.
+   * @param stream the stream to read the data from
+   * @param ranges the byte ranges to read
+   * @param allocate the byte buffer allocation
+   * @param minimumSeek the minimum number of bytes to seek over
+   * @param maximumRead the largest number of bytes to combine into a single 
read
+   */
+  public static void readAsync(PositionedReadable stream,
+   List<? extends FileRange> ranges,
+   IntFunction<ByteBuffer> allocate,
+   int minimumSeek,
+   int maximumRead) {
+if (isOrderedDisjoint(ranges, 1, minimumSeek)) {
+  for(FileRange range: ranges) {
+range.setData(readRangeFrom(stream, range, allocate));
+  }
+} else {
+  for(CombinedFileRange range: sortAndMergeRanges(ranges, 1, minimumSeek,
+  maximumRead)) {
+CompletableFuture<ByteBuffer> read =
+readRangeFrom(stream, range, allocate);
+for(FileRange child: range.getUnderlying()) {
+  child.setData(read.thenApply(
+  (b) -> sliceTo(b, range.getOffset(), child)));
+}
+  }
+}
+  }
+
+  /**
+   * Synchronously reads a range from the stream dealing with the combinations
+   * of ByteBuffers buffers and PositionedReadable streams.
+   * @param stream the stream to read from
+   * @param range the range to read
+   * @param allocate the function to allocate ByteBuffers
+   * @return the CompletableFuture that contains the read data
+   */
+  public static CompletableFuture<ByteBuffer> readRangeFrom(PositionedReadable stream,
+FileRange range,
+IntFunction<ByteBuffer> allocate) {
+CompletableFuture<ByteBuffer> result = new CompletableFuture<>();
+try {
+  ByteBuffer buffer = allocate.apply(range.getLength());
+  if (stream instanceof ByteBufferPositionedReadable) {
+((ByteBufferPositionedReadable) stream).readFully(range.getOffset(),
+buffer);
+buffer.flip();
+  } else {
+if (buffer.isDirect()) {
+  // if we need to read data from a direct buffer and the stream 
doesn't
+  // support it, we allocate a byte array to use.
+  byte[] tmp = new byte[range.getLength()];
+  stream.readFully(range.getOffset(), tmp, 0, tmp.length);
+  buffer.put(tmp);
+  buffer.flip();
+} else {
+  stream.readFully(range.getOffset(), buffer.array(),
+  buffer.arrayOffset(), range.getLength());
+}
+  }
+  result.complete(buffer);
+} catch (IOException ioe) {
+  result.completeExceptionally(ioe);
+}

[GitHub] [hadoop] goiri commented on pull request #2274: HDFS-15557. Log the reason why a storage log file can't be deleted

2020-09-21 Thread GitBox


goiri commented on pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#issuecomment-696264939


   Not sure why the build came out so badly... let's see if we can retrigger.






[GitHub] [hadoop] steveloughran commented on pull request #2257: HADOOP-17023 Tune S3AFileSystem.listStatus() api.

2020-09-21 Thread GitBox


steveloughran commented on pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257#issuecomment-696211236


   LGTM. +1 
   
   -thank you for a great piece of work here!






[GitHub] [hadoop] hadoop-yetus commented on pull request #2281: HDFS-15516.Add info for create flags in NameNode audit logs.

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2281:
URL: https://github.com/apache/hadoop/pull/2281#issuecomment-695807876


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 14s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 13s |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 59s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 54s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 57s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 13s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 13s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 42s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 42s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 109m  0s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-dynamometer-workload in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 284m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2281 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5825ccc227ce 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 95dfc875d32 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/6/testReport/ |
   | Max. process+thread count | 2930 (v

[jira] [Work logged] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17259?focusedWorklogId=487699&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487699
 ]

ASF GitHub Bot logged work on HADOOP-17259:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:28
Start Date: 22/Sep/20 03:28
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao merged pull request #2301:
URL: https://github.com/apache/hadoop/pull/2301


   





Issue Time Tracking
---

Worklog Id: (was: 487699)
Time Spent: 2h  (was: 1h 50m)

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from 
> classpath
> --
>
> Key: HADOOP-17259
> URL: https://issues.apache.org/jira/browse/HADOOP-17259
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Some applications, like Tez, do not have ssl-client.xml and ssl-server.xml on 
> the classpath. Instead, they pass the parsed SSL configuration directly as the 
> input configuration object. This ticket is opened to allow that case. 
> TEZ-4096 attempts to solve this issue but takes a different approach, which 
> may not work for existing Hadoop clients that use SSLFactory from 
> hadoop-common. 
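
A sketch of that usage, assuming the fallback is in place; the property names and paths are illustrative, and the real Tez wiring differs.

```
// Illustration: pass SSL settings directly in the Configuration instead of
// relying on ssl-client.xml being present on the classpath.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.ssl.SSLFactory;

public class DirectSslConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(false);
    conf.set("ssl.client.truststore.location", "/etc/security/truststore.jks");
    conf.set("ssl.client.truststore.password", "changeit");

    SSLFactory factory = new SSLFactory(SSLFactory.Mode.CLIENT, conf);
    factory.init();   // with the fallback, the settings above are honoured
    try {
      // build connections with factory.createSSLSocketFactory() ...
    } finally {
      factory.destroy();
    }
  }
}
```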






[GitHub] [hadoop] steveloughran commented on pull request #2291: HADOOP-17255. JavaKeyStoreProvider fails to create a new key if the keystore is HDFS

2020-09-21 Thread GitBox


steveloughran commented on pull request #2291:
URL: https://github.com/apache/hadoop/pull/2291#issuecomment-696306527


   +1 for the better errors. You know in #743 I'm going to make rename/3 public 
in FileSystem too, so we can move the code in distcp etc to using it?
   






[jira] [Work logged] (HADOOP-17267) Add debug-level logs in Filesystem#close

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17267?focusedWorklogId=487668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487668
 ]

ASF GitHub Bot logged work on HADOOP-17267:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:26
Start Date: 22/Sep/20 03:26
Worklog Time Spent: 10m 
  Work Description: klcopp opened a new pull request #2321:
URL: https://github.com/apache/hadoop/pull/2321


   HDFS reuses the same cached FileSystem object across the file system. If the 
client calls FileSystem.close(), closeAllForUgi(), or closeAll() (if it applies 
to the instance) anywhere in the system, it purges the cache of that FS 
instance, and trying to use the instance then results in an IOException: 
FileSystem closed.
   
   It would be a great help to clients to see where and when a given FS 
instance was closed. I.e. in close(), closeAllForUgi(), or closeAll(), it would 
be great to see a DEBUG-level log of
   - calling method name, class, file name/line number
   - FileSystem object's identity hash (FileSystem.close() only)
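
A minimal sketch of the kind of log line being asked for. This is a hypothetical helper, not the patch in this PR, and the stack-frame index depends on where it is called from.

```
// Illustration only: a DEBUG log naming the caller that closed the FileSystem.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class CloseLogging {
  private static final Logger LOG = LoggerFactory.getLogger(CloseLogging.class);

  static void logClose(Object fsInstance) {
    if (LOG.isDebugEnabled()) {
      // Frame 0 is getStackTrace, 1 is logClose, 2 is close(); frame 3 is
      // whoever called close(). Adjust the index if the call depth changes.
      StackTraceElement caller = Thread.currentThread().getStackTrace()[3];
      LOG.debug("Closing FileSystem@{} from {}.{} ({}:{})",
          System.identityHashCode(fsInstance),
          caller.getClassName(), caller.getMethodName(),
          caller.getFileName(), caller.getLineNumber());
    }
  }
}
```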





Issue Time Tracking
---

Worklog Id: (was: 487668)
Time Spent: 50m  (was: 40m)

> Add debug-level logs in Filesystem#close
> 
>
> Key: HADOOP-17267
> URL: https://issues.apache.org/jira/browse/HADOOP-17267
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Karen Coppage
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDFS reuses the same cached FileSystem object across the file system. If the 
> client calls FileSystem.close(), closeAllForUgi(), or closeAll() (if it 
> applies to the instance) anywhere in the system, it purges the cache of that 
> FS instance, and trying to use the instance then results in an IOException: 
> FileSystem closed.
> It would be a great help to clients to see where and when a given FS instance 
> was closed. I.e. in close(), closeAllForUgi(), or closeAll(), it would be 
> great to see a DEBUG-level log of
>  * calling method name, class, file name/line number
>  * FileSystem object's identity hash (FileSystem.close() only)






[GitHub] [hadoop] ferhui commented on a change in pull request #2322: HADOOP-17277. Correct spelling errors for separator

2020-09-21 Thread GitBox


ferhui commented on a change in pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#discussion_r492428800



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/NativeTask.h
##
@@ -59,8 +59,8 @@ enum Endium {
 #define NATIVE_HADOOP_VERSION "native.hadoop.version"
 
 #define NATIVE_INPUT_SPLIT "native.input.split"
-#define INPUT_LINE_KV_SEPERATOR 
"mapreduce.input.keyvaluelinerecordreader.key.value.separator"

Review comment:
   Thanks! Leave these; I have one more commit to fix it!








[GitHub] [hadoop] steveloughran commented on pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-21 Thread GitBox


steveloughran commented on pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069#issuecomment-696286408


   Closing this; best to rebuild as a new PR atop trunk.
   
   For the next PR I plan to split into
   * hadoop-common
   * hadoop-aws
   
   trickier than I'd like, but it means we can get the common one reviewed and 
in early






[jira] [Work logged] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17259?focusedWorklogId=487837&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487837
 ]

ASF GitHub Bot logged work on HADOOP-17259:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:39
Start Date: 22/Sep/20 03:39
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #2301:
URL: https://github.com/apache/hadoop/pull/2301#issuecomment-696330056


   Thanks @steveloughran for the review. I'll merge it shortly.  





Issue Time Tracking
---

Worklog Id: (was: 487837)
Time Spent: 2h 10m  (was: 2h)

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from 
> classpath
> --
>
> Key: HADOOP-17259
> URL: https://issues.apache.org/jira/browse/HADOOP-17259
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Some applications, like Tez, do not have ssl-client.xml and ssl-server.xml on 
> the classpath. Instead, they pass the parsed SSL configuration directly as the 
> input configuration object. This ticket is opened to allow that case. 
> TEZ-4096 attempts to solve this issue but takes a different approach, which 
> may not work for existing Hadoop clients that use SSLFactory from 
> hadoop-common. 






[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=487516&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487516
 ]

ASF GitHub Bot logged work on HADOOP-13327:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:12
Start Date: 22/Sep/20 03:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696394844









Issue Time Tracking
---

Worklog Id: (was: 487516)
Time Spent: 1h 10m  (was: 1h)

> Add OutputStream + Syncable to the Filesystem Specification
> ---
>
> Key: HADOOP-13327
> URL: https://issues.apache.org/jira/browse/HADOOP-13327
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, 
> HADOOP-13327-branch-2-001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Write down what a Filesystem output stream should do. While the core API is 
> defined in Java, that doesn't say what's expected about visibility and 
> durability, and the Hadoop Syncable interface is entirely ours to define.






[GitHub] [hadoop] liuml07 commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-09-21 Thread GitBox


liuml07 commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-696401277


   156 files being changed makes this a fun patch. I can review it this week or 
next (if not committed yet). Don't get blocked by my review 😄 






[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=487559&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487559
 ]

ASF GitHub Bot logged work on HADOOP-13327:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:16
Start Date: 22/Sep/20 03:16
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696236937


   @joshelser -rebased and addressed your minor points
   
   I think you are right, we don't need separate HSYNC/HFLUSH options, it's 
just that they've been there (separately) for a while.
   
   What to do?
   
   * maintain as is
   * add SYNCABLE which says "both"
   * Declare that HSYNC is sufficient and the only one you should look for. If 
you implement HFLUSH and not HSYNC, you are of no use to applications. Retain 
support for the HFLUSH probe, but in the spec mention it only as of historical 
interest. 
   
   I like option 3, now I think about it.
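
For context, this is how a client can probe for sync support today via the existing StreamCapabilities constants; under option 3, "hsync" would be the only capability applications need to check.

```
// Probing an output stream for durable-sync support.
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.StreamCapabilities;

final class SyncProbe {
  static boolean canDurablyPersist(FSDataOutputStream out) {
    // StreamCapabilities.HSYNC is the "hsync" capability string; true means
    // hsync() is expected to flush to durable storage rather than be a no-op.
    return out.hasCapability(StreamCapabilities.HSYNC);
  }
}
```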





Issue Time Tracking
---

Worklog Id: (was: 487559)
Time Spent: 1h 20m  (was: 1h 10m)

> Add OutputStream + Syncable to the Filesystem Specification
> ---
>
> Key: HADOOP-13327
> URL: https://issues.apache.org/jira/browse/HADOOP-13327
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, 
> HADOOP-13327-branch-2-001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Write down what a Filesystem output stream should do. While the core API is 
> defined in Java, that doesn't say what's expected about visibility and 
> durability, and the Hadoop Syncable interface is entirely ours to define.






[GitHub] [hadoop] liuml07 commented on a change in pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-21 Thread GitBox


liuml07 commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r491857797



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockStatsMXBean.java
##
@@ -145,9 +150,11 @@ public void testStorageTypeStatsJMX() throws Exception {
   Map storageTypeStats = 
(Map)entry.get("value");
   typesPresent.add(storageType);
   if (storageType.equals("ARCHIVE") || storageType.equals("DISK") ) {
-assertEquals(3l, storageTypeStats.get("nodesInService"));
+assertEquals(3L, storageTypeStats.get("nodesInService"));

Review comment:
   I have not used Java 7 for a while, but I remember vaguely this is 
actually supported?
   
   https://docs.oracle.com/javase/specs/jls/se7/html/jls-14.html
   
   

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockStatsMXBean.java
##
@@ -145,9 +150,11 @@ public void testStorageTypeStatsJMX() throws Exception {
   Map storageTypeStats = 
(Map)entry.get("value");
   typesPresent.add(storageType);
   if (storageType.equals("ARCHIVE") || storageType.equals("DISK") ) {
-assertEquals(3l, storageTypeStats.get("nodesInService"));
+assertEquals(3L, storageTypeStats.get("nodesInService"));

Review comment:
   Hadoop releases before 2.10 are all end of life (EoL). Hadoop 2.10 is 
the only version using Java 7. We do not need any support, compile or runtime, 
for Java versions before Java 7.
   
   Hadoop 3.x are all using Java 8+. We do not need any Java 7 support in 
Hadoop 3.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-11867) FS API: Add a high-performance vectored Read to FSDataInputStream API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11867?focusedWorklogId=487783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487783
 ]

ASF GitHub Bot logged work on HADOOP-11867:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:35
Start Date: 22/Sep/20 03:35
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#1830:
URL: https://github.com/apache/hadoop/pull/1830#discussion_r491936825



##
File path: 
hadoop-common-project/benchmark/src/main/java/org/apache/hadoop/benchmark/AsyncBenchmark.java
##
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.benchmark;
+
+import org.apache.hadoop.conf.Configuration;

Review comment:
   review import ordering





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487783)
Time Spent: 3.5h  (was: 3h 20m)

> FS API: Add a high-performance vectored Read to FSDataInputStream API
> -
>
> Key: HADOOP-11867
> URL: https://issues.apache.org/jira/browse/HADOOP-11867
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3, hdfs-client
>Affects Versions: 3.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> The most significant way to read from a filesystem efficiently is to let the 
> FileSystem implementation handle the seek behaviour underneath the API, so it 
> can be as efficient as possible.
> A better approach to the seek problem is to provide a sequence of read 
> locations as part of a single call, while letting the system schedule/plan 
> the reads ahead of time.
> This is exceedingly useful for seek-heavy readers on HDFS, since this allows 
> for potentially optimizing away the seek-gaps within the FSDataInputStream 
> implementation.
> For seek+read systems with even more latency than locally-attached disks, 
> something like a {{readFully(long[] offsets, ByteBuffer[] chunks)}} would 
> take care of the seeks internally while reading chunk.remaining() bytes into 
> each chunk (which may be {{slice()}}ed off a bigger buffer).
> The base implementation can stub in this as a sequence of seeks + read() into 
> ByteBuffers, without forcing each FS implementation to override this in any 
> way.
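
To make that fallback concrete, here is a minimal sketch of the naive base
implementation described above. The (long[] offsets, ByteBuffer[] chunks)
signature is the hypothetical shape from this description, not the final API of
the pull request.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.PositionedReadable;

// Sketch only: one positioned read per requested range, letting the stream
// handle each seek. A real implementation could coalesce adjacent ranges.
class GatherReadSketch {
  static void readFully(PositionedReadable in, long[] offsets, ByteBuffer[] chunks)
      throws IOException {
    for (int i = 0; i < offsets.length; i++) {
      ByteBuffer chunk = chunks[i];
      byte[] tmp = new byte[chunk.remaining()];
      in.readFully(offsets[i], tmp, 0, tmp.length);
      chunk.put(tmp);
      chunk.flip();   // make the filled chunk readable by the caller
    }
  }
}
{code}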



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17021) Add concat fs command

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17021?focusedWorklogId=487523&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487523
 ]

ASF GitHub Bot logged work on HADOOP-17021:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:12
Start Date: 22/Sep/20 03:12
Worklog Time Spent: 10m 
  Work Description: wojiaodoubao commented on pull request #1993:
URL: https://github.com/apache/hadoop/pull/1993#issuecomment-696015914


   Fix checkstyle and whitespace. Pending jenkins.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487523)
Time Spent: 2h  (was: 1h 50m)

> Add concat fs command
> -
>
> Key: HADOOP-17021
> URL: https://issues.apache.org/jira/browse/HADOOP-17021
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-17021.001.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We should add a concat fs command for ease of use. It concatenates existing 
> source files into the target file using FileSystem.concat().
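
For reference, a minimal sketch of the FileSystem.concat() call that the
proposed shell command would wrap; the paths below are purely illustrative.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path target = new Path("/data/part-00000");          // illustrative paths
    Path[] sources = {
        new Path("/data/part-00001"),
        new Path("/data/part-00002")
    };
    FileSystem fs = target.getFileSystem(conf);
    // On HDFS this moves the source files' blocks onto the target and removes
    // the sources; filesystems without support throw UnsupportedOperationException.
    fs.concat(target, sources);
  }
}
{code}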



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=487836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487836
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:39
Start Date: 22/Sep/20 03:39
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-696401277


   156 files being changed makes this a fun patch. I can review it this week or 
next week (if it is not committed yet). Don't get blocked by my review 😄 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487836)
Time Spent: 1h  (was: 50m)

> S3A statistics to support IOStatistics
> --
>
> Key: HADOOP-17271
> URL: https://issues.apache.org/jira/browse/HADOOP-17271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> S3A to rework statistics with
> * API + Implementation split of the interfaces used by subcomponents when 
> reporting stats
> * S3A Instrumentation to implement all the interfaces
> * streams, etc to all implement IOStatisticsSources and serve to callers
> * Add some tracking of durations of remote requests
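
As a sketch of what "serve to callers" might look like, assuming the interface
and helper names proposed in the linked pull requests (they may still change
before commit):

{code}
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

// Sketch: a caller probes a stream for statistics through the proposed
// IOStatisticsSource mechanism; the names here are assumptions from the PRs.
class IOStatsProbe {
  static void dump(FSDataInputStream in) {
    IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
    if (stats != null) {
      System.out.println(IOStatisticsLogging.ioStatisticsToString(stats));
    }
  }
}
{code}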



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] omalley commented on a change in pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


omalley commented on a change in pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#discussion_r492174484



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AsyncReaderUtils.java
##
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.impl;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.IntFunction;
+
+import org.apache.hadoop.fs.ByteBufferPositionedReadable;
+import org.apache.hadoop.fs.FileRange;
+import org.apache.hadoop.fs.PositionedReadable;
+
+public class AsyncReaderUtils {
+  /**
+   * Read fully a list of file ranges asynchronously from this file.
+   * The default iterates through the ranges to read each synchronously, but
+   * the intent is that subclasses can make more efficient readers.
+   * The data or exceptions are pushed into {@link FileRange#getData()}.
+   * @param stream the stream to read the data from
+   * @param ranges the byte ranges to read
+   * @param allocate the byte buffer allocation
+   * @param minimumSeek the minimum number of bytes to seek over
+   * @param maximumRead the largest number of bytes to combine into a single 
read
+   */
+  public static void readAsync(PositionedReadable stream,
+   List<? extends FileRange> ranges,
+   IntFunction<ByteBuffer> allocate,
+   int minimumSeek,
+   int maximumRead) {
+if (isOrderedDisjoint(ranges, 1, minimumSeek)) {
+  for(FileRange range: ranges) {
+range.setData(readRangeFrom(stream, range, allocate));
+  }
+} else {
+  for(CombinedFileRange range: sortAndMergeRanges(ranges, 1, minimumSeek,
+  maximumRead)) {
+CompletableFuture<ByteBuffer> read =
+readRangeFrom(stream, range, allocate);
+for(FileRange child: range.getUnderlying()) {
+  child.setData(read.thenApply(
+  (b) -> sliceTo(b, range.getOffset(), child)));
+}
+  }
+}
+  }
+
+  /**
+   * Synchronously reads a range from the stream dealing with the combinations
+   * of ByteBuffers buffers and PositionedReadable streams.
+   * @param stream the stream to read from
+   * @param range the range to read
+   * @param allocate the function to allocate ByteBuffers
+   * @return the CompletableFuture that contains the read data
+   */
+  public static CompletableFuture<ByteBuffer> readRangeFrom(PositionedReadable stream,
+FileRange range,
+IntFunction<ByteBuffer> allocate) {
+CompletableFuture<ByteBuffer> result = new CompletableFuture<>();
+try {
+  ByteBuffer buffer = allocate.apply(range.getLength());
+  if (stream instanceof ByteBufferPositionedReadable) {
+((ByteBufferPositionedReadable) stream).readFully(range.getOffset(),
+buffer);
+buffer.flip();
+  } else {
+if (buffer.isDirect()) {
+  // if we need to read data from a direct buffer and the stream 
doesn't
+  // support it, we allocate a byte array to use.
+  byte[] tmp = new byte[range.getLength()];
+  stream.readFully(range.getOffset(), tmp, 0, tmp.length);
+  buffer.put(tmp);
+  buffer.flip();
+} else {
+  stream.readFully(range.getOffset(), buffer.array(),
+  buffer.arrayOffset(), range.getLength());
+}
+  }
+  result.complete(buffer);
+} catch (IOException ioe) {
+  result.completeExceptionally(ioe);
+}
+return result;
+  }
+
+  /**
+   * Is the given input list:
+   * <ul>
+   *   <li>already sorted by offset</li>
+   *   <li>each range is more than minimumSeek apart</li>
+   *   <li>the start and end of each range is a multiple of chunkSize</li>
+   * </ul>
+   *
+   * @param input the list of input ranges
+   * @param chunkSize the size of the chunks that the offset & end must align 
to
+   * @param minimumSeek the minimum distance between ranges
+   * @return tr

[jira] [Work logged] (HADOOP-17023) Tune listStatus() api of s3a.

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17023?focusedWorklogId=487832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487832
 ]

ASF GitHub Bot logged work on HADOOP-17023:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:38
Start Date: 22/Sep/20 03:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257#issuecomment-694312667


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  28m 50s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  2s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 21s |  hadoop-tools/hadoop-aws: The 
patch generated 4 new + 62 unchanged - 0 fixed = 66 total (was 62)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  2s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   0m 25s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  71m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2257 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 177d1c552191 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20a0e6278d6 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2257/4/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   |  Test Results | 
htt

[jira] [Work logged] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?focusedWorklogId=487921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487921
 ]

ASF GitHub Bot logged work on HADOOP-17277:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:46
Start Date: 22/Sep/20 03:46
Worklog Time Spent: 10m 
  Work Description: ferhui commented on a change in pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#discussion_r492428800



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/NativeTask.h
##
@@ -59,8 +59,8 @@ enum Endium {
 #define NATIVE_HADOOP_VERSION "native.hadoop.version"
 
 #define NATIVE_INPUT_SPLIT "native.input.split"
-#define INPUT_LINE_KV_SEPERATOR 
"mapreduce.input.keyvaluelinerecordreader.key.value.separator"

Review comment:
   Thanks! I will leave these and push one more commit to fix it!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487921)
Time Spent: 1h 20m  (was: 1h 10m)

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Many spelling errors for separator, correct them!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17267) Add debug-level logs in Filesystem#close

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17267?focusedWorklogId=487826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487826
 ]

ASF GitHub Bot logged work on HADOOP-17267:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:38
Start Date: 22/Sep/20 03:38
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2321:
URL: https://github.com/apache/hadoop/pull/2321#discussion_r492270963



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -621,6 +621,9 @@ public static LocalFileSystem 
newInstanceLocal(Configuration conf)
* @throws IOException a problem arose closing one or more filesystem.
*/
   public static void closeAll() throws IOException {
+if (LOGGER.isDebugEnabled()) {
+  debugLogFileSystemClose("closeAll", null);

Review comment:
   pass in "" instead of null and there's no need for the ? : below

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -632,9 +635,21 @@ public static void closeAll() throws IOException {
*/
   public static void closeAllForUGI(UserGroupInformation ugi)
   throws IOException {
+if (LOGGER.isDebugEnabled()) {
+  debugLogFileSystemClose("closeAllForUGI", "UGI: " + ugi.toString());

Review comment:
   just use ugi and let the automatic toString do the work

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -632,9 +635,21 @@ public static void closeAll() throws IOException {
*/
   public static void closeAllForUGI(UserGroupInformation ugi)
   throws IOException {
+if (LOGGER.isDebugEnabled()) {
+  debugLogFileSystemClose("closeAllForUGI", "UGI: " + ugi.toString());
+}
 CACHE.closeAll(ugi);
   }
 
+  private static void debugLogFileSystemClose(String methodName, String 
additionalInfo) {
+StackTraceElement callingMethod = new 
Throwable().fillInStackTrace().getStackTrace()[2];
+LOGGER.debug(
+"FileSystem." + methodName + "() called by method: "

Review comment:
   Prefer SLF4J {} placeholders with the values passed as varargs.
   You might also want to log the entire throwable at TRACE. Why? Because one 
level up isn't always enough; see the sketch below.
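   
   A minimal sketch of the suggested shape (LOGGER, methodName and the [2] 
   stack offset mirror the patch under review, so treat them as assumptions):
   
   ```java
   // Sketch: parameterized SLF4J message with varargs, plus the full
   // throwable only at TRACE.
   private static void debugLogFileSystemClose(String methodName,
       String additionalInfo) {
     Throwable stack = new Throwable("FileSystem." + methodName + "()");
     StackTraceElement caller = stack.getStackTrace()[2];
     LOGGER.debug("FileSystem.{}() called by {} {}",
         methodName, caller, additionalInfo);
     if (LOGGER.isTraceEnabled()) {
       // SLF4J logs the trailing Throwable argument as a full stack trace
       LOGGER.trace("Full call stack of FileSystem.{}()", methodName, stack);
     }
   }
   ```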





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487826)
Time Spent: 1h  (was: 50m)

> Add debug-level logs in Filesystem#close
> 
>
> Key: HADOOP-17267
> URL: https://issues.apache.org/jira/browse/HADOOP-17267
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Karen Coppage
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> HDFS reuses the same cached FileSystem object across the file system. If the 
> client calls FileSystem.close(), closeAllForUgi(), or closeAll() (if it 
> applies to the instance) anywhere in the system it purges the cache of that 
> FS instance, and trying to use the instance results in an IOException: 
> FileSystem closed.
> It would be a great help to clients to see where and when a given FS instance 
> was closed. I.e. in close(), closeAllForUgi(), or closeAll(), it would be 
> great to see a DEBUG-level log of
>  * calling method name, class, file name/line number
>  * FileSystem object's identity hash (FileSystem.close() only)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=487593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487593
 ]

ASF GitHub Bot logged work on HADOOP-13327:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:19
Start Date: 22/Sep/20 03:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-650435353


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 31s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  17m 37s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 35s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 39s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 28s |  hadoop-azure-datalake in trunk failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 39s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 43s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 23s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  22m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 34s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  18m 34s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 57s |  root: The patch generated 2 new 
+ 105 unchanged - 4 fixed = 107 total (was 109)  |
   | +1 :green_heart: |  mvnsite  |   3m 55s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  5s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 31s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 38s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 30s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 27s |  hadoop-azure-datalake in the patch failed 
with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   7m 27s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 30s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  92m  4s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 30s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m  4s |  hadoop-azure-datalake in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 285m 26s |   |
   
   
   | Reason | Tests 

[GitHub] [hadoop] fengnanli commented on pull request #2266: HDFS-15554. RBF: force router check file existence in destinations before adding/updating mount points

2020-09-21 Thread GitBox


fengnanli commented on pull request #2266:
URL: https://github.com/apache/hadoop/pull/2266#issuecomment-696267644


   @ayushtkn Can you help commit the change? Thanks a lot!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-21 Thread GitBox


LeonGao91 commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-696393867


   @Hexiaoqiao Would you please take a second look? I have added the check we 
discussed, along with a UT.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#issuecomment-696332135


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 18s |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m  1s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m 51s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 43s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 24s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 48s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m 34s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  20m 33s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 59s |  hadoop-common-project: The patch 
generated 32 new + 90 unchanged - 4 fixed = 122 total (was 94)  |
   | +1 :green_heart: |  mvnsite  |   4m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  6s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 37s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 12s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  17m 29s |  hadoop-common-project in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 28s |  benchmark in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 47s |  The patch generated 18 ASF License 
warnings.  |
   |  |   | 237m  0s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.crypto.key.kms.server.TestKMS |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1830/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1830 |
   | JIRA Issue | HADOOP-11867 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 252f9a642739 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7a6265ac425 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1830/9/artifact/out/diff-checkstyle-hadoop-common-project.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1830/9/artifact/out/patch-unit-hadoop-common-project.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1830/9/testReport/ |
   |

[GitHub] [hadoop] steveloughran commented on a change in pull request #2321: HADOOP-17267: Add debug-level logs in Filesystem#close

2020-09-21 Thread GitBox


steveloughran commented on a change in pull request #2321:
URL: https://github.com/apache/hadoop/pull/2321#discussion_r492270963



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -621,6 +621,9 @@ public static LocalFileSystem 
newInstanceLocal(Configuration conf)
* @throws IOException a problem arose closing one or more filesystem.
*/
   public static void closeAll() throws IOException {
+if (LOGGER.isDebugEnabled()) {
+  debugLogFileSystemClose("closeAll", null);

Review comment:
   pass in "" instead of null and there's no need for the ? : below

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -632,9 +635,21 @@ public static void closeAll() throws IOException {
*/
   public static void closeAllForUGI(UserGroupInformation ugi)
   throws IOException {
+if (LOGGER.isDebugEnabled()) {
+  debugLogFileSystemClose("closeAllForUGI", "UGI: " + ugi.toString());

Review comment:
   just use ugi and let the automatic toString do the work

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
##
@@ -632,9 +635,21 @@ public static void closeAll() throws IOException {
*/
   public static void closeAllForUGI(UserGroupInformation ugi)
   throws IOException {
+if (LOGGER.isDebugEnabled()) {
+  debugLogFileSystemClose("closeAllForUGI", "UGI: " + ugi.toString());
+}
 CACHE.closeAll(ugi);
   }
 
+  private static void debugLogFileSystemClose(String methodName, String 
additionalInfo) {
+StackTraceElement callingMethod = new 
Throwable().fillInStackTrace().getStackTrace()[2];
+LOGGER.debug(
+"FileSystem." + methodName + "() called by method: "

Review comment:
   Prefer SLF4J {} placeholders with the values passed as varargs.
   You might also want to log the entire throwable at TRACE. Why? Because one 
level up isn't always enough.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #2069: HADOOP-16830. IOStatistics API.

2020-09-21 Thread GitBox


steveloughran closed pull request #2069:
URL: https://github.com/apache/hadoop/pull/2069


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2320: MAPREDUCE-7282: remove v2 commit algorithm for correctness reasons.

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2320:
URL: https://github.com/apache/hadoop/pull/2320#issuecomment-696113918







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=487551&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487551
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:15
Start Date: 22/Sep/20 03:15
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324


   Contains
   HADOOP-16830. Add IOStatistics API
   
   
   This is the aggregate branch which also contains #2323; it supersedes #2069 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487551)
Time Spent: 40m  (was: 0.5h)

> S3A statistics to support IOStatistics
> --
>
> Key: HADOOP-17271
> URL: https://issues.apache.org/jira/browse/HADOOP-17271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> S3A to rework statistics with
> * API + Implementation split of the interfaces used by subcomponents when 
> reporting stats
> * S3A Instrumentation to implement all the interfaces
> * streams, etc to all implement IOStatisticsSources and serve to callers
> * Add some tracking of durations of remote requests



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-09-21 Thread GitBox


steveloughran commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696236937


   @joshelser -rebased and addressed your minor points
   
   I think you are right, we don't need separate HSYNC/HFLUSH options, it's 
just that they've been there (separately) for a while.
   
   What to do?
   
   * maintain as is
   * add SYNCABLE which says "both"
   * Declare that HSYNC is sufficient and the only one you should look for. If 
you implement HFLUSH and not HSYNC, you are of no use to applications. Retain 
support for the HFLUSH probe, but in the spec only mention it as being of 
historical interest.
   
   I like option 3, now that I think about it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1830: HADOOP-11867: Add gather API to file system.

2020-09-21 Thread GitBox


steveloughran commented on a change in pull request #1830:
URL: https://github.com/apache/hadoop/pull/1830#discussion_r491936825



##
File path: 
hadoop-common-project/benchmark/src/main/java/org/apache/hadoop/benchmark/AsyncBenchmark.java
##
@@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.benchmark;
+
+import org.apache.hadoop.conf.Configuration;

Review comment:
   review import ordering





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17023) Tune listStatus() api of s3a.

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17023?focusedWorklogId=487505&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487505
 ]

ASF GitHub Bot logged work on HADOOP-17023:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:07
Start Date: 22/Sep/20 03:07
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487505)
Time Spent: 3h 10m  (was: 3h)

> Tune listStatus() api of s3a.
> -
>
> Key: HADOOP-17023
> URL: https://issues.apache.org/jira/browse/HADOOP-17023
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> A similar optimisation to the one done for listLocatedStatus() in 
> https://issues.apache.org/jira/browse/HADOOP-16465 can be done for the listStatus() 
> api as well. 
> This is going to reduce the number of remote calls in the case of directory 
> listing.
>  
> CC [~ste...@apache.org] [~shwethags]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2323: HADOOP-16830. Add public IOStatistics API.

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-696387784


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 19s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 45s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 42s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  the patch passed  |
   | -1 :x: |  compile  |  22m  3s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  22m  3s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |  19m 42s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |  19m 42s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 53s |  
hadoop-common-project/hadoop-common: The patch generated 17 new + 142 unchanged 
- 4 fixed = 159 total (was 146)  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 40s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | -1 :x: |  findbugs  |   2m 50s |  hadoop-common-project/hadoop-common 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  5s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 180m 35s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.counters; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 183] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.gauges; locked 60% of time 
 Unsynchronized access at IOStatisticsSnapshot.java:60% of time  Unsynchronized 
access at IOStatisticsSnapshot.java:[line 188] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.maximums; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 198] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.meanStatistics; locked 60% 
of time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 203] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.minimums; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 117] |
   |  |  org.apache.hadoop.fs.statistics.IOS

[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=487541&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487541
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:14
Start Date: 22/Sep/20 03:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-696387784


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
10 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 45s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 32s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 19s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 24s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 45s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 42s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  the patch passed  |
   | -1 :x: |  compile  |  22m  3s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |  22m  3s |  root in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |  19m 42s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |  19m 42s |  root in the patch failed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 53s |  
hadoop-common-project/hadoop-common: The patch generated 17 new + 142 unchanged 
- 4 fixed = 159 total (was 146)  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 40s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | -1 :x: |  findbugs  |   2m 50s |  hadoop-common-project/hadoop-common 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  5s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 180m 35s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.counters; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 183] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.gauges; locked 60% of time 
 Unsynchronized access at IOStatisticsSnapshot.java:60% of time  Unsynchronized 
access at IOStatisticsSnapshot.java:[line 188] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.maximums; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 198] |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.meanStatistics; locked 60% 
of t

[GitHub] [hadoop] steveloughran closed pull request #2091: test PR

2020-09-21 Thread GitBox


steveloughran closed pull request #2091:
URL: https://github.com/apache/hadoop/pull/2091


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17258) MagicS3GuardCommitter fails with `pendingset` already exists

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17258?focusedWorklogId=487500&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487500
 ]

ASF GitHub Bot logged work on HADOOP-17258:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:07
Start Date: 22/Sep/20 03:07
Worklog Time Spent: 10m 
  Work Description: dongjoon-hyun commented on pull request #2315:
URL: https://github.com/apache/hadoop/pull/2315#issuecomment-695837396


   Since it didn't break anything in CI, that's great. Thank you, @sunchao and 
@viirya. I'm still looking at the test suites.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487500)
Time Spent: 1h 40m  (was: 1.5h)

> MagicS3GuardCommitter fails with `pendingset` already exists
> 
>
> Key: HADOOP-17258
> URL: https://issues.apache.org/jira/browse/HADOOP-17258
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In `trunk/branch-3.3/branch-3.2`, `MagicS3GuardCommitter.innerCommitTask` has 
> `false` at `pendingSet.save`.
> {code}
> try {
>   pendingSet.save(getDestFS(), taskOutcomePath, false);
> } catch (IOException e) {
>   LOG.warn("Failed to save task commit data to {} ",
>   taskOutcomePath, e);
>   abortPendingUploads(context, pendingSet.getCommits(), true);
>   throw e;
> }
> {code}
> And, it can cause a job failure like the following.
> {code}
> WARN TaskSetManager: Lost task 1562.1 in stage 1.0 (TID 1788, 100.92.11.63, 
> executor 26): org.apache.spark.SparkException: Task failed while writing rows.
> at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
> at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
> at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> at org.apache.spark.scheduler.Task.run(Task.scala:123)
> at 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
> at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
> Source)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
> Source)
> at java.base/java.lang.Thread.run(Unknown Source)
> Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: 
> s3a://xxx/__magic/app-attempt-/task_20200911063607_0001_m_001562.pendingset
>  already exists
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:761)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
> at 
> org.apache.hadoop.util.JsonSerialization.save(JsonSerialization.java:269)
> at 
> org.apache.hadoop.fs.s3a.commit.files.PendingSet.save(PendingSet.java:170)
> at 
> org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.innerCommitTask(MagicS3GuardCommitter.java:220)
> at 
> org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter.commitTask(MagicS3GuardCommitter.java:165)
> at 
> org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
> at 
> org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
> at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:244)
> at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
> at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
> at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileForma
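
The pull request attached to this issue proposes letting a retried task attempt overwrite the pendingset file left behind by an earlier attempt. A minimal sketch of that idea, mirroring the innerCommitTask fragment quoted in the description above (pendingSet, getDestFS(), taskOutcomePath, LOG, abortPendingUploads and context all come from that snippet), would simply flip the overwrite flag; this is an illustration of the proposal, not the committed patch.

{code}
// Sketch only: overwrite=true lets a second task attempt replace the
// pendingset file instead of failing with FileAlreadyExistsException.
try {
  pendingSet.save(getDestFS(), taskOutcomePath, true /* overwrite */);
} catch (IOException e) {
  LOG.warn("Failed to save task commit data to {} ",
      taskOutcomePath, e);
  abortPendingUploads(context, pendingSet.getCommits(), true);
  throw e;
}
{code}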

[GitHub] [hadoop] steveloughran merged pull request #2257: HADOOP-17023 Tune S3AFileSystem.listStatus() api.

2020-09-21 Thread GitBox


steveloughran merged pull request #2257:
URL: https://github.com/apache/hadoop/pull/2257


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?focusedWorklogId=487493&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487493
 ]

ASF GitHub Bot logged work on HADOOP-17277:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:06
Start Date: 22/Sep/20 03:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#issuecomment-696443154


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   9m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   6m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no findbugs output file 
(findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file 
(findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  18m 48s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 8 new + 155 unchanged - 
8 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 49s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 30 new + 133 unchanged - 
30 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  10m 20s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 19s |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs  |
   | +0 :ok: |  findbugs  |   0m 23s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 34s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  99m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   8m  9s |  hadoop-hdfs-native-client in the 
patch passed.  |
   | -1 :x: |  unit  |   8m 33s |  hadoop-hdfs-rbf in the patch passed.  |
   | -1 :x: |  unit  | 145m 36s |  hadoop-yarn in the patch failed.  |
 

[GitHub] [hadoop] dongjoon-hyun commented on pull request #2315: HADOOP-17258. Make MagicS3GuardCommitter overwrite the existing pendingSet file if exists

2020-09-21 Thread GitBox


dongjoon-hyun commented on pull request #2315:
URL: https://github.com/apache/hadoop/pull/2315#issuecomment-695837396


   Since it didn't break anything in CI, that's great. Thank you, @sunchao and 
@viirya . I've been looking at the suites still.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2322: HADOOP-17277. Correct spelling errors for separator

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#issuecomment-696443154


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   9m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   6m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no findbugs output file 
(findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file 
(findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  18m 48s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 8 new + 155 unchanged - 
8 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 49s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 30 new + 133 unchanged - 
30 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  10m 20s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 19s |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs  |
   | +0 :ok: |  findbugs  |   0m 23s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 34s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  99m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   8m  9s |  hadoop-hdfs-native-client in the 
patch passed.  |
   | -1 :x: |  unit  |   8m 33s |  hadoop-hdfs-rbf in the patch passed.  |
   | -1 :x: |  unit  | 145m 36s |  hadoop-yarn in the patch failed.  |
   | -1 :x: |  unit  |   0m 37s |  hadoop-yarn-site in the patch failed.  |
   | -1 :x: |  unit  |   0m 38s |  hadoop-mapreduce-client-nativetask in the 
patch failed.  |
   | +0 :ok: |  asflicense  |   0m 37s |  ASF License check generated no 
output?  |
   |  |   | 537m 13s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | had

[jira] [Work logged] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?focusedWorklogId=487479&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487479
 ]

ASF GitHub Bot logged work on HADOOP-17277:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:05
Start Date: 22/Sep/20 03:05
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2322:
URL: https://github.com/apache/hadoop/pull/2322#discussion_r492269381



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/NativeTask.h
##
@@ -59,8 +59,8 @@ enum Endium {
 #define NATIVE_HADOOP_VERSION "native.hadoop.version"
 
 #define NATIVE_INPUT_SPLIT "native.input.split"
-#define INPUT_LINE_KV_SEPERATOR 
"mapreduce.input.keyvaluelinerecordreader.key.value.separator"

Review comment:
Leave these as they were before. Yes, it's wrong, but changing it will 
break everything that compiles against it
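
One common, compatibility-preserving way to correct a misspelled public constant is to keep the old name (marked deprecated) and add a correctly spelled alias that resolves to the same value. The Java sketch below is a hypothetical illustration of that pattern; the class name is invented and this is not the actual NativeTask change (the header under review is C++).

{code}
// Hypothetical illustration: retain a misspelled public constant for
// compatibility while exposing a correctly spelled alias.
public final class KeyValueLineRecordReaderKeys {

  /** Old, misspelled name: kept so existing code still compiles. */
  @Deprecated
  public static final String INPUT_LINE_KV_SEPERATOR =
      "mapreduce.input.keyvaluelinerecordreader.key.value.separator";

  /** Correctly spelled alias pointing at the same configuration key. */
  public static final String INPUT_LINE_KV_SEPARATOR = INPUT_LINE_KV_SEPERATOR;

  private KeyValueLineRecordReaderKeys() {
  }
}
{code}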





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487479)
Time Spent: 50m  (was: 40m)

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Many spelling errors for separator, correct them!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17267) Add debug-level logs in Filesystem#close

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17267?focusedWorklogId=487470&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487470
 ]

ASF GitHub Bot logged work on HADOOP-17267:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:04
Start Date: 22/Sep/20 03:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2321:
URL: https://github.com/apache/hadoop/pull/2321#issuecomment-696247542


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 36s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  21m  2s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 29s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 23s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  20m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 50s |  
hadoop-common-project/hadoop-common: The patch generated 4 new + 76 unchanged - 
0 fixed = 80 total (was 76)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 39s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  0s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 187m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2321 |
   | JIRA Issue | HADOOP-17267 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3ed3d0909910 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7a6265ac425 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/testReport/ |
   | Max. process+thread count | 1366 (vs. ulimit of 5500

[GitHub] [hadoop] steveloughran commented on a change in pull request #2322: HADOOP-17277. Correct spelling errors for separator

2020-09-21 Thread GitBox


steveloughran commented on a change in pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#discussion_r492269381



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/NativeTask.h
##
@@ -59,8 +59,8 @@ enum Endium {
 #define NATIVE_HADOOP_VERSION "native.hadoop.version"
 
 #define NATIVE_INPUT_SPLIT "native.input.split"
-#define INPUT_LINE_KV_SEPERATOR 
"mapreduce.input.keyvaluelinerecordreader.key.value.separator"

Review comment:
Leave these as they were before. Yes, it's wrong, but changing it will 
break everything that compiles against it





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=487459&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487459
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:03
Start Date: 22/Sep/20 03:03
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-696292801


   hadoop common part of #2069 : API and impl,
   
   While this is isolated for review, know that I'll be doing my changes on 
#2324 and cherry-picking here. It's just isolated for ease of review/to scare 
people less.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487459)
Time Spent: 2h 20m  (was: 2h 10m)

> Add public IOStatistics API
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement it.
> ThreadLocal isn't enough because the helper threads need to update the 
> thread-local value of the instigator.
> My initial PoC doesn't address that issue, but it shows what I'm thinking of.
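
A minimal sketch of what such a statistics-serving interface could look like, assuming the simple "map of named counters" model described above, is shown below; the interface and method names are invented for illustration and are not the committed HADOOP-16830 API.

{code}
// Illustrative sketch only; not the actual org.apache.hadoop.fs.statistics API.
import java.util.Map;

/** A source of IO statistics that a stream or filesystem can expose. */
interface SimpleIOStatistics {
  /** Snapshot of named counters, e.g. "stream_read_operations" -> 42. */
  Map<String, Long> counters();
}

/** Wrapper classes (e.g. FS data input/output streams) pass the call through. */
interface SimpleIOStatisticsSource {
  /** @return statistics, or null if the implementation has none. */
  default SimpleIOStatistics getIOStatistics() {
    return null;
  }
}
{code}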



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] YaYun-Wang commented on a change in pull request #2189: HDFS-15025. Applying NVDIMM storage media to HDFS

2020-09-21 Thread GitBox


YaYun-Wang commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r491852471



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockStatsMXBean.java
##
@@ -145,9 +150,11 @@ public void testStorageTypeStatsJMX() throws Exception {
   Map storageTypeStats = 
(Map)entry.get("value");
   typesPresent.add(storageType);
   if (storageType.equals("ARCHIVE") || storageType.equals("DISK") ) {
-assertEquals(3l, storageTypeStats.get("nodesInService"));
+assertEquals(3L, storageTypeStats.get("nodesInService"));

Review comment:
`storageType` is a parameter of type `java.lang.String`, and `switch()` 
does not support `java.lang.String` before Java 1.7. So, would `if-else` be 
more appropriate here?
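
For reference, `switch` on a `String` has been supported since Java 7, which current Hadoop branches already require, so either form compiles there. The sketch below is a small invented illustration of a `String` switch shaped like the test quoted above; the expected values are made up and this is not a proposed patch.

{code}
// Illustrative only: switch on a String storage type name (valid since Java 7).
final class StorageTypeExpectations {

  static long expectedNodesInService(String storageType) {
    switch (storageType) {
      case "ARCHIVE":
      case "DISK":
        return 3L;              // invented value for the example
      case "NVDIMM":
        return 1L;              // invented value for the example
      default:
        throw new IllegalArgumentException("Unknown storage type: " + storageType);
    }
  }

  private StorageTypeExpectations() {
  }
}
{code}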





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2321: HADOOP-17267: Add debug-level logs in Filesystem#close

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2321:
URL: https://github.com/apache/hadoop/pull/2321#issuecomment-696247542


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 36s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  21m  2s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 31s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 29s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 31s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  23m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 23s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  20m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 50s |  
hadoop-common-project/hadoop-common: The patch generated 4 new + 76 unchanged - 
0 fixed = 80 total (was 76)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 39s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  0s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 187m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2321 |
   | JIRA Issue | HADOOP-17267 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3ed3d0909910 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7a6265ac425 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/testReport/ |
   | Max. process+thread count | 1366 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2321/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-

[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=487464&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487464
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:03
Start Date: 22/Sep/20 03:03
Worklog Time Spent: 10m 
  Work Description: dbtsai commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-696356461


   @steveloughran we addressed your comments. Please take a look again when you 
have time. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487464)
Time Spent: 13h 10m  (was: 13h)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the cluster, in container images, or in local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM platforms in 
> the jar file, and it can automatically load the native binaries into the JVM from 
> the jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
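
For illustration, snappy-java can be used with no native installation at all; a minimal compress/uncompress round trip, assuming only that the org.xerial.snappy:snappy-java artifact is on the classpath, might look like the following sketch.

{code}
// Minimal snappy-java round trip; no libhadoop/libsnappy or LD_LIBRARY_PATH
// setup is needed because the native binaries ship inside the jar.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

public class SnappyRoundTrip {
  public static void main(String[] args) throws IOException {
    byte[] input = "hello snappy".getBytes(StandardCharsets.UTF_8);
    byte[] compressed = Snappy.compress(input);
    byte[] restored = Snappy.uncompress(compressed);
    System.out.println(new String(restored, StandardCharsets.UTF_8));
  }
}
{code}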



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2274: HDFS-15557. Log the reason why a storage log file can't be deleted

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2274:
URL: https://github.com/apache/hadoop/pull/2274#issuecomment-695921398


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  31m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  16m 16s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 17s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 26s |  hadoop-hdfs in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  trunk passed  |
   | -1 :x: |  mvnsite  |   0m 25s |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   1m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-hdfs in trunk failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +0 :ok: |  spotbugs  |   3m  8s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 25s |  hadoop-hdfs in trunk failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 22s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javac  |   0m 22s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  compile  |   0m 21s |  hadoop-hdfs in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  javac  |   0m 21s |  hadoop-hdfs in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 20s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 23s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |   0m 21s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-hdfs in the patch failed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | -1 :x: |  findbugs  |   0m 22s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 21s |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 22s |  ASF License check generated no 
output?  |
   |  |   |  59m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2274/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2274 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 62e70170050b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7a6265ac425 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2274/4/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2274/4/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2274/4/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2274/4/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdf

[GitHub] [hadoop] steveloughran commented on pull request #2323: HADOOP-16830. Add public IOStatistics API.

2020-09-21 Thread GitBox


steveloughran commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-696292801


   hadoop common part of #2069 : API and impl,
   
   While this is isolated for review, know that I'll be doing my changes on 
#2324 and cherry-picking here. It's just isolated for ease of review/to scare 
people less.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dbtsai commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-09-21 Thread GitBox


dbtsai commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-696356461


   @steveloughran we addressed your comments. Please take a look again when you 
have time. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=487452&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487452
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:02
Start Date: 22/Sep/20 03:02
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323


   
   Hadoop-common side of the patch
   
   * API definition
   * interface
   * tests
   * implementation
   * wiring up of the IO core classes



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487452)
Time Spent: 2h 10m  (was: 2h)

> Add public IOStatistics API
> ---
>
> Key: HADOOP-16830
> URL: https://issues.apache.org/jira/browse/HADOOP-16830
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Applications like to collect the statistics which specific operations take, 
> by collecting exactly those operations done during the execution of FS API 
> calls by their individual worker threads, and returning these to their job 
> driver
> * S3A has a statistics API for some streams, but it's a non-standard one; 
> Impala &c can't use it
> * FileSystem storage statistics are public, but as they aren't cross-thread, 
> they don't aggregate properly
> Proposed
> # A new IOStatistics interface to serve up statistics
> # S3A to implement
> # other stores to follow
> # Pass-through from the usual wrapper classes (FS data input/output streams)
> It's hard to think about how best to offer an API for operation context 
> stats, and how to actually implement it.
> ThreadLocal isn't enough because the helper threads need to update the 
> thread-local value of the instigator.
> My initial PoC doesn't address that issue, but it shows what I'm thinking of.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2288: HDFS-15548. Allow configuring DISK/ARCHIVE storage types on same device mount

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2288:
URL: https://github.com/apache/hadoop/pull/2288#issuecomment-695952357


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m  4s |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 54s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m 33s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 48s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 39s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   4m 17s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 15s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |   1m 39s |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
with JDK Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 602 
unchanged - 0 fixed = 604 total (was 602)  |
   | +1 :green_heart: |  compile  |   1m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |   1m 26s |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 2 new 
+ 586 unchanged - 0 fixed = 588 total (was 586)  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 18s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 145m 33s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 251m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
   |   | hadoop.hdfs.server.datanode.TestBlockRecovery |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.namenode.ha.TestUpdateBlockTailing |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2288/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2288 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux af4a1e33ff1c 4.15.0-112-generic #113-Ubuntu SMP

[GitHub] [hadoop] steveloughran commented on pull request #2097: HADOOP-17088. Failed to load Xinclude files with relative path in cas…

2020-09-21 Thread GitBox


steveloughran commented on pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097#issuecomment-696281627


   +1, merged to trunk, and I will pull back into branch-3.3, for the 3.3.1 
release.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17088) Failed to load XInclude files with relative path.

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17088?focusedWorklogId=487431&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487431
 ]

ASF GitHub Bot logged work on HADOOP-17088:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:00
Start Date: 22/Sep/20 03:00
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2097:
URL: https://github.com/apache/hadoop/pull/2097#issuecomment-696281627


   +1, merged to trunk, and I will pull back into branch-3.3, for the 3.3.1 
release.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487431)
Time Spent: 1h 20m  (was: 1h 10m)

> Failed to load XInclude files with relative path.
> -
>
> Key: HADOOP-17088
> URL: https://issues.apache.org/jira/browse/HADOOP-17088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.1
>Reporter: Yushi Hayasaka
>Assignee: Yushi Hayasaka
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When we create a configuration file which loads an external XML file with a 
> relative path, and try to load it by calling `Configuration.addResource` 
> with `Path(URI)`, we get an error saying it failed to load the external XML, after 
> https://issues.apache.org/jira/browse/HADOOP-14216 was merged.
> {noformat}
> Exception in thread "main" java.lang.RuntimeException: java.io.IOException: 
> Fetch fail on include for 'mountTable.xml' with no fallback while loading 
> 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3021)
>   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
>   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
>   at 
> org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2896)
>   at com.company.test.Main.main(Main.java:29)
> Caused by: java.io.IOException: Fetch fail on include for 'mountTable.xml' 
> with no fallback while loading 'file:/opt/hadoop/etc/hadoop/core-site.xml'
>   at 
> org.apache.hadoop.conf.Configuration$Parser.handleEndElement(Configuration.java:3271)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3331)
>   at 
> org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
>   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
>   ... 4 more
> {noformat}
> The cause is that the URI is passed as a string to the java.io.File constructor, 
> and File does not support a file URI given as a string, so my suggestion is to 
> convert the string to a URI first.
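
A small sketch of the behaviour described above and of the suggested conversion is shown below; it illustrates plain java.io.File handling of a "file:" URI string and is not the actual Configuration patch.

{code}
// java.io.File does not understand a "file:" URI given as a plain string,
// but it does accept a java.net.URI directly.
import java.io.File;
import java.net.URI;

public class FileUriDemo {
  public static void main(String[] args) {
    String s = "file:/opt/hadoop/etc/hadoop/core-site.xml";

    // Treated as a literal path name containing "file:", so exists() is
    // false even when the file is present on disk.
    File wrong = new File(s);

    // Converting the string to a URI first resolves the real path.
    File right = new File(URI.create(s));

    System.out.println(wrong.getPath() + " -> exists? " + wrong.exists());
    System.out.println(right.getPath() + " -> exists? " + right.exists());
  }
}
{code}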



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17259) Allow SSLFactory fallback to input config if ssl-*.xml fail to load from classpath

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17259?focusedWorklogId=487425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487425
 ]

ASF GitHub Bot logged work on HADOOP-17259:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 03:00
Start Date: 22/Sep/20 03:00
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2301:
URL: https://github.com/apache/hadoop/pull/2301#issuecomment-696304504


   LGTM: +1 .. commit at your leisure
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487425)
Time Spent: 1h 50m  (was: 1h 40m)

> Allow SSLFactory fallback to input config if ssl-*.xml fail to load from 
> classpath
> --
>
> Key: HADOOP-17259
> URL: https://issues.apache.org/jira/browse/HADOOP-17259
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.5
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Some applications, like Tez, do not have ssl-client.xml and ssl-server.xml on 
> the classpath. Instead, they directly pass the parsed SSL configuration as the 
> input configuration object. This ticket is opened to allow that case. 
> TEZ-4096 attempts to solve this issue but takes a different approach which 
> may not work for existing Hadoop clients that use SSLFactory from 
> hadoop-common. 
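
One way to realize the fallback described above is to load the ssl-*.xml resource only when it is actually present on the classpath and otherwise reuse the configuration object supplied by the caller. The sketch below is a simplified illustration under that assumption, not the committed HADOOP-17259 change.

{code}
// Sketch: fall back to the caller-supplied Configuration when the ssl-*.xml
// resource cannot be found on the classpath.
import java.net.URL;
import org.apache.hadoop.conf.Configuration;

final class SslConfigLoader {

  static Configuration readSslConfig(Configuration conf, String resourceName) {
    URL resource = conf.getResource(resourceName);   // e.g. "ssl-client.xml"
    if (resource == null) {
      // Not on the classpath (the Tez case): use the parsed SSL settings
      // the caller already placed in conf.
      return conf;
    }
    Configuration sslConf = new Configuration(false);
    sslConf.addResource(resourceName);
    return sslConf;
  }

  private SslConfigLoader() {
  }
}
{code}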



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2301: HADOOP-17259. Allow SSLFactory fallback to input config if ssl-*.xml …

2020-09-21 Thread GitBox


steveloughran commented on pull request #2301:
URL: https://github.com/apache/hadoop/pull/2301#issuecomment-696304504


   LGTM: +1 .. commit at your leisure
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?focusedWorklogId=487383&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487383
 ]

ASF GitHub Bot logged work on HADOOP-17277:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 01:16
Start Date: 22/Sep/20 01:16
Worklog Time Spent: 10m 
  Work Description: ferhui commented on a change in pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#discussion_r492428800



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/NativeTask.h
##
@@ -59,8 +59,8 @@ enum Endium {
 #define NATIVE_HADOOP_VERSION "native.hadoop.version"
 
 #define NATIVE_INPUT_SPLIT "native.input.split"
-#define INPUT_LINE_KV_SEPERATOR 
"mapreduce.input.keyvaluelinerecordreader.key.value.separator"

Review comment:
   Thanks! I'll leave these as they are and push one more commit to fix it!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 487383)
Time Spent: 40m  (was: 0.5h)

> Correct spelling errors for separator
> -
>
> Key: HADOOP-17277
> URL: https://issues.apache.org/jira/browse/HADOOP-17277
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HADOOP-17277.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Many identifiers misspell "separator" as "seperator"; correct them!
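
For context, a small hedged sketch of where the correctly spelled configuration key is 
consumed on the Java side. The job name and tab separator below are illustrative only; 
judging from the diff above, the patch renames misspelled identifiers such as the 
INPUT_LINE_KV_SEPERATOR macro, while the property value string is already spelled correctly:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class SeparatorConfigExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The property value has always been spelled correctly; only identifiers
    // such as the native INPUT_LINE_KV_SEPERATOR macro carried the typo.
    conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");

    Job job = Job.getInstance(conf, "kv-separator-example"); // illustrative job name
    job.setInputFormatClass(KeyValueTextInputFormat.class);
  }
}
{code}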



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on a change in pull request #2322: HADOOP-17277. Correct spelling errors for separator

2020-09-21 Thread GitBox


ferhui commented on a change in pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#discussion_r492428800



##
File path: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/NativeTask.h
##
@@ -59,8 +59,8 @@ enum Endium {
 #define NATIVE_HADOOP_VERSION "native.hadoop.version"
 
 #define NATIVE_INPUT_SPLIT "native.input.split"
-#define INPUT_LINE_KV_SEPERATOR 
"mapreduce.input.keyvaluelinerecordreader.key.value.separator"

Review comment:
   Thanks! I'll leave these as they are and push one more commit to fix it!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17277) Correct spelling errors for separator

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17277?focusedWorklogId=487341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487341
 ]

ASF GitHub Bot logged work on HADOOP-17277:
---

Author: ASF GitHub Bot
Created on: 22/Sep/20 00:02
Start Date: 22/Sep/20 00:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#issuecomment-696443154


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   9m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   6m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no findbugs output file 
(findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file 
(findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  18m 48s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 8 new + 155 unchanged - 
8 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 49s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 30 new + 133 unchanged - 
30 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  10m 20s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 19s |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs  |
   | +0 :ok: |  findbugs  |   0m 23s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 34s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  99m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   8m  9s |  hadoop-hdfs-native-client in the 
patch passed.  |
   | -1 :x: |  unit  |   8m 33s |  hadoop-hdfs-rbf in the patch passed.  |
   | -1 :x: |  unit  | 145m 36s |  hadoop-yarn in the patch failed.  |
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2322: HADOOP-17277. Correct spelling errors for separator

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2322:
URL: https://github.com/apache/hadoop/pull/2322#issuecomment-696443154


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 15s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m 49s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   9m 25s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 27s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   6m 50s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 41s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no findbugs output file 
(findbugsXml.xml)  |
   | +0 :ok: |  findbugs  |   0m 21s |  
branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file 
(findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   8m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 48s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  18m 48s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 8 new + 155 unchanged - 
8 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |  18m 48s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  16m 49s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 30 new + 133 unchanged - 
30 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 49s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  10m 20s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch 1 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   7m  5s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 19s |  
hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs  |
   | +0 :ok: |  findbugs  |   0m 23s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 34s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  99m 34s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   8m  9s |  hadoop-hdfs-native-client in the 
patch passed.  |
   | -1 :x: |  unit  |   8m 33s |  hadoop-hdfs-rbf in the patch passed.  |
   | -1 :x: |  unit  | 145m 36s |  hadoop-yarn in the patch failed.  |
   | -1 :x: |  unit  |   0m 37s |  hadoop-yarn-site in the patch failed.  |
   | -1 :x: |  unit  |   0m 38s |  hadoop-mapreduce-client-nativetask in the 
patch failed.  |
   | +0 :ok: |  asflicense  |   0m 37s |  ASF License check generated no 
output?  |
   |  |   | 537m 13s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | had

[jira] [Commented] (HADOOP-17216) delta.io spark task commit encountering S3 cached 404/FileNotFoundException

2020-09-21 Thread Cheng Wei (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17199706#comment-17199706
 ] 

Cheng Wei commented on HADOOP-17216:


Thanks [~ste...@apache.org]. I filed a ticket against Delta and will follow up on the 
issue there, but I will leave a record here in case anyone hits a similar 
issue.
[https://github.com/delta-io/delta/issues/523]

> delta.io spark task commit encountering S3 cached 404/FileNotFoundException
> ---
>
> Key: HADOOP-17216
> URL: https://issues.apache.org/jira/browse/HADOOP-17216
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.2
> Environment: hadoop = "3.1.2"
> hadoop-aws = "3.1.2"
> spark = "2.4.5"
> spark-on-k8s-operator = "v1beta2-1.1.2-2.4.5"
> deployed into AWS EKS Kubernetes. Version information below:
> Server Version: version.Info\{Major:"1", Minor:"16+", 
> GitVersion:"v1.16.8-eks-e16311", 
> GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", 
> BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", 
> Platform:"linux/amd64"}
>Reporter: Cheng Wei
>Priority: Major
> Fix For: 3.3.0
>
>
> Hi,
> When using Spark streaming with Delta Lake, I occasionally get the following 
> exception, roughly 1 run in 100. Thanks.
> {code:java}
> Caused by: java.io.FileNotFoundException: No such file or directory: 
> s3a://[pathToFolder]/date=2020-07-29/part-5-046af631-7198-422c-8cc8-8d3adfb4413e.c000.snappy.parquet
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2255)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2149)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088)
>  at 
> org.apache.spark.sql.delta.files.DelayedCommitProtocol$$anonfun$8.apply(DelayedCommitProtocol.scala:141)
>  at 
> org.apache.spark.sql.delta.files.DelayedCommitProtocol$$anonfun$8.apply(DelayedCommitProtocol.scala:139)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>  at 
> org.apache.spark.sql.delta.files.DelayedCommitProtocol.commitTask(DelayedCommitProtocol.scala:139)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242){code}
>  
> -Environment
> hadoop = "3.1.2"
>  hadoop-aws = "3.1.2"
> spark = "2.4.5"
> spark-on-k8s-operator = "v1beta2-1.1.2-2.4.5"
>  
> deployed into AWS EKS Kubernetes. Version information below:
> Server Version: version.Info\{Major:"1", Minor:"16+", 
> GitVersion:"v1.16.8-eks-e16311", 
> GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", 
> BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", 
> Platform:"linux/amd64"}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=487319&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487319
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 21/Sep/20 22:36
Start Date: 21/Sep/20 22:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-696415310


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  2s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
39 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  30m 51s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  30m 51s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2058 unchanged - 
1 fixed = 2058 total (was 2059)  |
   | +1 :green_heart: |  compile  |  25m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  25m 33s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1952 unchanged - 
1 fixed = 1952 total (was 1953)  |
   | -0 :warning: |  checkstyle  |   3m 46s |  root: The patch generated 25 new 
+ 267 unchanged - 25 fixed = 292 total (was 292)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 58s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 47s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   3m 46s |  hadoop-common-project/hadoop-common 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 42s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   7m 23s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 48s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  The patch does not generate 
ASF License warnings.  |
  

[GitHub] [hadoop] hadoop-yetus commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-696415310


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  2s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
39 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 42s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  20m  9s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m  5s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 23s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m 49s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 42s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |  30m 51s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  30m 51s |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2058 unchanged - 
1 fixed = 2058 total (was 2059)  |
   | +1 :green_heart: |  compile  |  25m 33s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  25m 33s |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1952 unchanged - 
1 fixed = 1952 total (was 1953)  |
   | -0 :warning: |  checkstyle  |   3m 46s |  root: The patch generated 25 new 
+ 267 unchanged - 25 fixed = 292 total (was 292)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javadoc  |   1m 58s |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01
 with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  hadoop-mapreduce-client-core in 
the patch passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01. 
 |
   | +1 :green_heart: |  javadoc  |   0m 47s |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | -1 :x: |  findbugs  |   3m 46s |  hadoop-common-project/hadoop-common 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 42s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   7m 23s |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 48s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 245m 34s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.fs.statistics.IOStatisticsSnapshot.counters; locked 60% of 
time  Unsynchronized access at IOStatisticsSnapshot.java:60% of time  
Unsynchronized access at IOStatisticsSnapshot.java:[line 183] |
   |  |  Inconsistent synchronization of
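
To make the FindBugs finding above concrete, here is a generic sketch of the 
"inconsistent synchronization" pattern it flags; this is not the actual 
IOStatisticsSnapshot code, and the class and code below are made up:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Generic illustration of FindBugs' "inconsistent synchronization" warning:
// the same field is accessed both with and without the lock held.
class CounterHolder {
  private final Map<String, Long> counters = new HashMap<>(); // hypothetical field

  synchronized void increment(String key) {        // locked access
    counters.merge(key, 1L, Long::sum);
  }

  long get(String key) {                           // unsynchronized access -> warning
    return counters.getOrDefault(key, 0L);
  }

  // The usual fix is to hold the same lock on every access (or switch to a
  // concurrent collection):
  synchronized long getSafely(String key) {
    return counters.getOrDefault(key, 0L);
  }
}
{code}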

[GitHub] [hadoop] hadoop-yetus commented on pull request #2102: HADOOP-13327. Specify Output Stream and Syncable

2020-09-21 Thread GitBox


hadoop-yetus commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696402075


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  29m  9s |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 14s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  19m 29s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  5s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 51s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 43s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  1s |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 10s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  21m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 10s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 10s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 47s |  root: The patch generated 3 new 
+ 105 unchanged - 4 fixed = 108 total (was 109)  |
   | +1 :green_heart: |  mvnsite  |   4m 57s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  6s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 21s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   8m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m  1s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 101m  4s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 58s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 19s |  hadoop-azure-datalake in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 316m 36s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.contract.rawlocal.TestRawlocalContractCreate |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestSnapshotCommands |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2102/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2102 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint xml |
   | uname | Linux 0d2edf1efca2 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 83c7c2b4c48 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-

[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification

2020-09-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=487303&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-487303
 ]

ASF GitHub Bot logged work on HADOOP-13327:
---

Author: ASF GitHub Bot
Created on: 21/Sep/20 22:02
Start Date: 21/Sep/20 22:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2102:
URL: https://github.com/apache/hadoop/pull/2102#issuecomment-696401829


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  28m 20s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
8 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 37s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 52s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 29s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  7s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m  3s |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 26s |  trunk passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 18s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 54s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  18m 54s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 52s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  16m 52s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 41s |  root: The patch generated 3 new 
+ 105 unchanged - 4 fixed = 108 total (was 109)  |
   | +1 :green_heart: |  mvnsite  |   4m 29s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 3 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  5s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  8s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m  0s |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 26s |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   7m 56s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 25s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  96m  2s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 47s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-azure-datalake in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 322m 30s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.contract.rawlocal.TestRawlocalContractCreate |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |---
