[GitHub] [hadoop] hadoop-yetus commented on pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
hadoop-yetus commented on pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#issuecomment-809912756

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 9s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 34s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 20s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 18s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 25s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/3/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) |
| +1 :green_heart: | mvnsite | 0m 33s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 24s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 21s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 7m 30s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/3/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-client-core in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | | 79m 48s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.mapred.TestJobEndNotifier |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2775 |
| JIRA Issue | MAPREDUCE-7329 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux ac69e3c3a4b2 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c4afa6873a5d9ff8927228ff5d1bb5c0b7692581 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[GitHub] [hadoop] hadoop-yetus commented on pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
hadoop-yetus commented on pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#issuecomment-809906960

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 40s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 20s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 32s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 26s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/2/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) |
| +1 :green_heart: | mvnsite | 0m 31s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 23s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 7m 15s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/2/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-client-core in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. |
| | | | 80m 27s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.mapred.TestJobEndNotifier |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2775/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2775 |
| JIRA Issue | MAPREDUCE-7329 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 2bf8a2b04881 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5dcb6055edccc1f12c2531002b7b66a611d9761f |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[jira] [Work started] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HADOOP-11245 started by Wei-Chiu Chuang.

> Update NFS gateway to use Netty4
> --------------------------------
> Key: HADOOP-11245
> URL: https://issues.apache.org/jira/browse/HADOOP-11245
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: nfs
> Reporter: Brandon Li
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h

-- This message was sent by Atlassian Jira (v8.3.4#803005)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=573867=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573867 ]

ASF GitHub Bot logged work on HADOOP-17608:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 30/Mar/21 03:49
Start Date: 30/Mar/21 03:49
Worklog Time Spent: 10m

Work Description: aajisaka commented on a change in pull request #2828:
URL: https://github.com/apache/hadoop/pull/2828#discussion_r603760177

## File path: hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
## @@ -550,7 +550,7 @@ public Void call() throws Exception {
       threadGroup.enumerate(threads);

Review comment:
Thanks @xiaoyuyao for your suggestion. Updated to use `ThreadUtils.findThreadsByName`.

> `Assert.assertEquals(1, result.size());`

Actually, the size is 2. I commented why it is.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
Worklog Id: (was: 573867)
Time Spent: 40m (was: 0.5h)

> TestKMS is flaky
> ----------------
> Key: HADOOP-17608
> URL: https://issues.apache.org/jira/browse/HADOOP-17608
> Project: Hadoop Common
> Issue Type: Bug
> Components: kms
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Priority: Major
> Labels: flaky-test, pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
> The following https tests are flaky:
> * testStartStopHttpsPseudo
> * testStartStopHttpsKerberos
> * testDelegationTokensOpsHttpsPseudo
> {noformat}
> [ERROR] testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS)  Time elapsed: 1.354 s  <<< ERROR!
> java.lang.NullPointerException
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553)
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534)
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258)
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235)
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230)
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534)
> 	at org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634)
> {noformat}
[GitHub] [hadoop] aajisaka commented on a change in pull request #2828: HADOOP-17608. Fix NPE in TestKMS
aajisaka commented on a change in pull request #2828:
URL: https://github.com/apache/hadoop/pull/2828#discussion_r603760177

## File path: hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
## @@ -550,7 +550,7 @@ public Void call() throws Exception {
       threadGroup.enumerate(threads);

Review comment:
Thanks @xiaoyuyao for your suggestion. Updated to use `ThreadUtils.findThreadsByName`.

> `Assert.assertEquals(1, result.size());`

Actually, the size is 2. I commented why it is.
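The review above replaces a raw `threadGroup.enumerate(threads)` with a name-based lookup. The pattern can be sketched as follows — a hypothetical helper that only mirrors what a `findThreadsByName`-style utility does, not Hadoop's actual `ThreadUtils` implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class ThreadFinder {
  // Collect all live threads whose name contains the given fragment.
  public static List<Thread> findThreadsByName(String fragment) {
    // Walk up to the root thread group so every live thread is visible.
    ThreadGroup group = Thread.currentThread().getThreadGroup();
    while (group.getParent() != null) {
      group = group.getParent();
    }
    // enumerate() silently truncates if the array is too small, so oversize it.
    Thread[] threads = new Thread[group.activeCount() * 2 + 10];
    int count = group.enumerate(threads, true);
    List<Thread> matches = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      if (threads[i] != null && threads[i].getName().contains(fragment)) {
        matches.add(threads[i]);
      }
    }
    return matches;
  }

  public static void main(String[] args) throws Exception {
    Thread t = new Thread(() -> {
      try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
    }, "demo-probe-thread");
    t.start();
    System.out.println(findThreadsByName("demo-probe").size());
    t.interrupt();
    t.join();
  }
}
```

Looking up threads by name avoids the test asserting on an exact thread count, which is exactly the kind of assumption that made the original `Assert.assertEquals(1, result.size())` flaky.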
[jira] [Work logged] (HADOOP-17222) Create socket address leveraging URI cache
[ https://issues.apache.org/jira/browse/HADOOP-17222?focusedWorklogId=573865=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573865 ]

ASF GitHub Bot logged work on HADOOP-17222:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 30/Mar/21 03:41
Start Date: 30/Mar/21 03:41
Worklog Time Spent: 10m

Work Description: 1996fanrui commented on a change in pull request #2817:
URL: https://github.com/apache/hadoop/pull/2817#discussion_r603757941

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
## @@ -39,13 +39,16 @@
 import java.nio.channels.SocketChannel;
 import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 import java.util.regex.Pattern;
 import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import javax.net.SocketFactory;
 import org.apache.hadoop.security.AccessControlException;
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;

Review comment:
Thanks for addressing it. LGTM.

Issue Time Tracking
-------------------
Worklog Id: (was: 573865)
Time Spent: 5.5h (was: 5h 20m)

> Create socket address leveraging URI cache
> ------------------------------------------
> Key: HADOOP-17222
> URL: https://issues.apache.org/jira/browse/HADOOP-17222
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common, hdfs-client
> Environment: HBase version: 2.1.0
> JVM: -Xmx2g -Xms2g
> hadoop hdfs version: 2.7.4
> disk: SSD
> OS: CentOS Linux release 7.4.1708 (Core)
> JMH Benchmark: @Fork(value = 1)
> @Warmup(iterations = 300)
> @Measurement(iterations = 300)
> Reporter: fanrui
> Assignee: fanrui
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: After Optimization remark.png, After optimization.svg, Before Optimization remark.png, Before optimization.svg
>
> Time Spent: 5.5h
> Remaining Estimate: 0h
>
> Note: not only the hdfs client gets the benefit — all callers of NetUtils.createSocketAddr do. The hdfs client is just used as an example.
>
> The hdfs client selects the best DN for an hdfs Block. Method call stack:
> DFSInputStream.chooseDataNode -> getBestNodeDNAddrPair -> NetUtils.createSocketAddr
> NetUtils.createSocketAddr creates the corresponding InetSocketAddress based on the host and port. There are some heavier operations in the NetUtils.createSocketAddr method, for example URI.create(target), so NetUtils.createSocketAddr takes more time to execute.
> The following is my performance report. The report is based on HBase calling hdfs. HBase is a high-frequency access client for hdfs, because HBase read operations often access a small DataBlock (about 64k) instead of the entire HFile. In the case of high-frequency access, the NetUtils.createSocketAddr method is time-consuming.
> h3. Test Environment:
> {code:java}
> HBase version: 2.1.0
> JVM: -Xmx2g -Xms2g
> hadoop hdfs version: 2.7.4
> disk: SSD
> OS: CentOS Linux release 7.4.1708 (Core)
> JMH Benchmark: @Fork(value = 1)
> @Warmup(iterations = 300)
> @Measurement(iterations = 300)
> {code}
> h4. Before Optimization FlameGraph:
> In the figure, we can see that DFSInputStream.getBestNodeDNAddrPair accounts for 4.86% of the entire CPU, and the creation of URIs accounts for a larger proportion.
> !Before Optimization remark.png!
> h3. Optimization ideas:
> NetUtils.createSocketAddr creates InetSocketAddress based on host and port. Here we can add a Cache for InetSocketAddress. The key of the Cache is host and port, and the value is InetSocketAddress.
> h4. After Optimization FlameGraph:
> In the figure, we can see that DFSInputStream.getBestNodeDNAddrPair accounts for 0.54% of the entire CPU. Here, ConcurrentHashMap is used as the Cache, and the ConcurrentHashMap.get() method gets data from the Cache. The CPU usage of DFSInputStream.getBestNodeDNAddrPair has been optimized from 4.86% to 0.54%.
> !After Optimization remark.png!
> h3. Original FlameGraph link:
> [Before Optimization|https://drive.google.com/file/d/133L5m75u2tu_KgKfGHZLEUzGR0XAfUl6/view?usp=sharing]
> [After Optimization FlameGraph|https://drive.google.com/file/d/133L5m75u2tu_KgKfGHZLEUzGR0XAfUl6/view?usp=sharing]
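The caching idea described above can be sketched in a few lines. This is a hypothetical standalone class, not the actual NetUtils patch — the real change adds a Guava `CacheBuilder`-based cache (as the new imports suggest), while this sketch uses a plain `ConcurrentHashMap` keyed by `"host:port"`:

```java
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SocketAddrCache {
  // Cache InetSocketAddress instances keyed by "host:port" so repeated
  // lookups skip the costly URI-parsing / address-construction path.
  private static final Map<String, InetSocketAddress> CACHE =
      new ConcurrentHashMap<>();

  public static InetSocketAddress createSocketAddr(String host, int port) {
    // computeIfAbsent builds the address once per host:port pair;
    // every later call is a single hash lookup.
    // createUnresolved avoids a DNS lookup in this illustration.
    return CACHE.computeIfAbsent(host + ":" + port,
        k -> InetSocketAddress.createUnresolved(host, port));
  }

  public static void main(String[] args) {
    InetSocketAddress a = createSocketAddr("nn1.example.com", 8020);
    InetSocketAddress b = createSocketAddr("nn1.example.com", 8020);
    System.out.println("same instance: " + (a == b)); // prints "same instance: true"
  }
}
```

A bounded cache (the Guava route) is the safer production choice: an unbounded map like this one would grow without limit if the set of host:port pairs is not small, and it never revalidates a resolved address.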
[GitHub] [hadoop] 1996fanrui commented on a change in pull request #2817: HADOOP-17222. Create socket address leveraging URI cache
1996fanrui commented on a change in pull request #2817:
URL: https://github.com/apache/hadoop/pull/2817#discussion_r603757941

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
## @@ -39,13 +39,16 @@
 import java.nio.channels.SocketChannel;
 import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 import java.util.regex.Pattern;
 import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 import javax.net.SocketFactory;
 import org.apache.hadoop.security.AccessControlException;
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;

Review comment:
Thanks for addressing it. LGTM.
[GitHub] [hadoop] lichaojacobs commented on a change in pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
lichaojacobs commented on a change in pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#discussion_r603756448

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
## @@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }

+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {

Review comment:
OK, I'm just used to using `isDebugEnabled`, and in this case it's really not necessary.
[GitHub] [hadoop] lichaojacobs commented on a change in pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
lichaojacobs commented on a change in pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#discussion_r603754126

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
## @@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }

+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Got one client socket...");
+          }
+          int readData = clientSocket.getInputStream().read();

Review comment:
And I think we can do it like this:
```
int readData = 0;
while (readData != -1) {
  readData = clientSocket.getInputStream().read();
}
LOG.debug("close socket cause client has closed.");
clientSocket.close();
```
[GitHub] [hadoop] lichaojacobs commented on a change in pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
lichaojacobs commented on a change in pull request #2775:
URL: https://github.com/apache/hadoop/pull/2775#discussion_r603752872

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java
## @@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }

+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Got one client socket...");
+          }
+          int readData = clientSocket.getInputStream().read();

Review comment:
This is just for the ping socket: the ping client won't send data, it only verifies that the server is reachable. We can do this because we know the client's behavior. And yes, if another process deliberately connects to the server and sends a byte, the cleaner won't work well.
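The drain-until-EOF idea the review converges on can be demonstrated end to end. The sketch below is a hypothetical, self-contained analogue of the cleaner loop (class and method names are invented for illustration; it is not the Application.java patch): read and discard bytes from an accepted ping socket until the client closes its side (`read()` returns -1), then close the server side.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PingSocketDrain {
  // Drain a ping socket: discard bytes until EOF (-1), which signals
  // the ping client has closed its end, then close our end too.
  public static void drainAndClose(Socket clientSocket) throws IOException {
    try (InputStream in = clientSocket.getInputStream()) {
      while (in.read() != -1) {
        // discard ping bytes; the loop exits on EOF
      }
    } finally {
      clientSocket.close();
    }
  }

  public static void main(String[] args) throws Exception {
    try (ServerSocket server = new ServerSocket(0)) {
      // Simulated ping client: connect, send one byte, then close.
      Thread client = new Thread(() -> {
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
          s.getOutputStream().write(0);
        } catch (IOException ignored) {
        }
      });
      client.start();
      Socket accepted = server.accept();
      drainAndClose(accepted);
      client.join();
      System.out.println("socket closed: " + accepted.isClosed());
    }
  }
}
```

As the reviewer notes, this only works because the ping client's behavior is known; a misbehaving peer that keeps the connection open and streams bytes would pin the cleaner thread in the read loop.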
[jira] [Commented] (HADOOP-16452) Increase ipc.maximum.data.length default from 64MB to 128MB
[ https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17311087#comment-17311087 ]

Wei-Chiu Chuang commented on HADOOP-16452:
------------------------------------------
For future reference: the Hadoop IPC sub component is used by other projects (Ratis, Tez) where they have different message size characteristics. When we bumped the default size it was meant for HDFS, but it may make sense for other projects to adopt a larger default message size too.

> Increase ipc.maximum.data.length default from 64MB to 128MB
> -----------------------------------------------------------
> Key: HADOOP-16452
> URL: https://issues.apache.org/jira/browse/HADOOP-16452
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc
> Affects Versions: 2.6.0
> Reporter: Wei-Chiu Chuang
> Assignee: Siyao Meng
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16452.001.patch, HADOOP-16452.002.patch
>
> Reason for bumping the default:
> Denser DataNodes are common. It is not uncommon to find a DataNode with > 7 million blocks these days.
> With such a high number of blocks, the block report message can exceed the 64mb limit (defined by ipc.maximum.data.length). The block reports are rejected, causing missing blocks in HDFS. We had to double this configuration value in order to work around the issue.
> We are seeing an increasing number of these cases. I think it's time to revisit some of these default values as the hardware evolves.
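The workaround described above ("we had to double this configuration value") is a plain site-configuration override. A sketch of the `core-site.xml` entry — the property name comes from the issue itself; the value shown is simply 128 MB expressed in bytes, which matches the new default being proposed:

```xml
<!-- core-site.xml: raise the maximum IPC message size so large
     block reports from dense DataNodes are not rejected.
     128 MB = 134217728 bytes (the old default was 64 MB). -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```

Since the limit guards against runaway allocations on the IPC server, it should be raised deliberately rather than removed; clusters with even denser DataNodes may need a larger value still.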
[GitHub] [hadoop] hadoop-yetus commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
hadoop-yetus commented on pull request #2830:
URL: https://github.com/apache/hadoop/pull/2830#issuecomment-809865126

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 55s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 18s | | trunk passed |
| +1 :green_heart: | compile | 5m 8s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 4m 47s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 15s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 58s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 30s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 55s | | the patch passed |
| +1 :green_heart: | compile | 7m 57s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 7m 57s | | the patch passed |
| +1 :green_heart: | compile | 6m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 6m 26s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 12s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 4 new + 225 unchanged - 9 fixed = 229 total (was 234) |
| +1 :green_heart: | mvnsite | 1m 46s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 2m 8s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 4m 36s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 45s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 365m 45s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 25m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | | 518m 52s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
| | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.TestViewDistributedFileSystem |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.TestPersistBlocks |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.TestReconstructStripedFileWithValidator |
| |
[jira] [Comment Edited] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17311073#comment-17311073 ]

lixianwei edited comment on HADOOP-17593 at 3/30/21, 1:56 AM:
--------------------------------------------------------------
[~brahmareddy] thank you very much, I have uploaded the patch.

was (Author: rigenyi):
[~brahmareddy] thank you very much, I have uploaded the package.

> hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
> ------------------------------------------------------------------------------------
> Key: HADOOP-17593
> URL: https://issues.apache.org/jira/browse/HADOOP-17593
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.3.1, 3.4.0
> Reporter: Steve Loughran
> Assignee: lixianwei
> Priority: Major
> Attachments: HADOOP-17593.001.patch
>
> Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling in log4j.
> it should not/must not, at least, not if the huaweicloud can live without it.
> * A version of log4j 2.12 on the CP is only going to complicate lives
> * once we can move onto it ourselves we need to be in control of versions
> [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile
> [INFO]    \- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile
> [INFO]       +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile
> [INFO]       +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile
> [INFO]       +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile
> [INFO]       \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile
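Removing a transitive dependency like this is normally done with a Maven `<exclusions>` block. A hypothetical sketch of what the hadoop-huaweicloud POM change could look like — the coordinates are taken from the dependency tree in the issue, but the actual patch (HADOOP-17593.001.patch) may differ:

```xml
<!-- Sketch: exclude the log4j2 artifacts that esdk-obs-java pulls in,
     so they no longer appear on hadoop-cloud-storage's classpath. -->
<dependency>
  <groupId>com.huaweicloud</groupId>
  <artifactId>esdk-obs-java</artifactId>
  <version>3.20.4.2</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

The exclusion only works if esdk-obs-java can genuinely run without log4j2 (for example via an SLF4J binding already on the classpath), which is exactly the caveat raised in the issue description.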
[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System
[ https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17311075#comment-17311075 ] lixianwei commented on HADOOP-16492: [~ste...@apache.org] [~brahmareddy] Thanks for your comments. I have excluded log4j, and uploaded a new patch in HADOOP-17593 > Support HuaweiCloud Object Storage as a Hadoop Backend File System > -- > > Key: HADOOP-16492 > URL: https://issues.apache.org/jira/browse/HADOOP-16492 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 3.4.0 >Reporter: zhongjun >Assignee: zhongjun >Priority: Major > Fix For: 3.4.0 > > Attachments: Difference Between OBSA and S3A.pdf, > HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, > HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, > HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, > HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, > HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, > HADOOP-16492.016.patch, HADOOP-16492.017.patch, OBSA HuaweiCloud OBS Adapter > for Hadoop Support.pdf, image-2020-11-21-18-51-51-981.png > > > Added support for HuaweiCloud OBS > ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, > just like what we do before for S3, ADLS, OSS, etc. With simple > configuration, Hadoop applications can read/write data from OBS without any > code change. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
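The exclusion described above would typically be expressed in the hadoop-huaweicloud module POM. A hedged sketch only: the coordinates come from the dependency tree quoted in HADOOP-17593, and the actual POM layout may differ.

```xml
<!-- Illustrative only: exclude the log4j 2.x artifacts pulled in
     transitively by the OBS SDK. Coordinates taken from the dependency
     tree quoted in HADOOP-17593; the real pom.xml may differ. -->
<dependency>
  <groupId>com.huaweicloud</groupId>
  <artifactId>esdk-obs-java</artifactId>
  <version>3.20.4.2</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With the exclusion in place, `mvn dependency:tree` on hadoop-cloud-storage should no longer show the two log4j entries under esdk-obs-java.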
[jira] [Commented] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17311073#comment-17311073 ] lixianwei commented on HADOOP-17593: [~brahmareddy] thank you very much, I have uploaded the package. > hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive > dependency > > > Key: HADOOP-17593 > URL: https://issues.apache.org/jira/browse/HADOOP-17593 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.1, 3.4.0 >Reporter: Steve Loughran >Assignee: lixianwei >Priority: Major > Attachments: HADOOP-17593.001.patch > > > Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling > in log4j. > it should not/must not, at least, not if the huaweicloud can live without it. > * A version of log4j 2.x on the CP is only going to complicate lives > * once we can move onto it ourselves we need to be in control of versions > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO]    \- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lixianwei updated HADOOP-17593: --- Attachment: HADOOP-17593.001.patch > hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive > dependency > > > Key: HADOOP-17593 > URL: https://issues.apache.org/jira/browse/HADOOP-17593 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.1, 3.4.0 >Reporter: Steve Loughran >Assignee: lixianwei >Priority: Major > Attachments: HADOOP-17593.001.patch > > > Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling > in log4j. > it should not/must not, at least, not if the huaweicloud can live without it. > * A version of log4j 2.x on the CP is only going to complicate lives > * once we can move onto it ourselves we need to be in control of versions > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO]    \- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10335) An ip whitelist based implementation to resolve Sasl properties per connection
[ https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HADOOP-10335: - Summary: An ip whitelist based implementation to resolve Sasl properties per connection (was: An ip whilelist based implementation to resolve Sasl properties per connection) > An ip whitelist based implementation to resolve Sasl properties per connection > -- > > Key: HADOOP-10335 > URL: https://issues.apache.org/jira/browse/HADOOP-10335 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Benoy Antony >Assignee: Benoy Antony >Priority: Major > Fix For: 2.6.0 > > Attachments: HADOOP-10335.patch, HADOOP-10335.patch, > HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf > > > As noted in HADOOP-10221, it is sometimes required for a Hadoop Server to > communicate with some client over encrypted channel and with some other > clients over unencrypted channel. > Hadoop-10221 introduced an interface _SaslPropertiesResolver_ and the > changes required to plugin and use _SaslPropertiesResolver_ to identify the > SaslProperties to be used for a connection. > In this jira, an ip-whitelist based implementation of > _SaslPropertiesResolver_ is attempted. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7
[ https://issues.apache.org/jira/browse/HADOOP-17601?focusedWorklogId=573798&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573798 ] ASF GitHub Bot logged work on HADOOP-17601: --- Author: ASF GitHub Bot Created on: 29/Mar/21 23:10 Start Date: 29/Mar/21 23:10 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2835: URL: https://github.com/apache/hadoop/pull/2835#issuecomment-809784691 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 10m 57s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 15m 24s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 21s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 18s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: | mvnsite | 0m 22s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 23s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 18s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 13s | the patch passed | | +1 :green_heart: | compile | 0m 14s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 14s | the patch passed | | +1 :green_heart: | compile | 0m 12s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: |
javac | 0m 12s | the patch passed | | +1 :green_heart: | mvnsite | 0m 14s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 14s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 13s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 12s | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. | | | | 33m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2835/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2835 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 690657b0e940 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / 616256b | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2835/1/testReport/ | | Max. process+thread count | 93 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2835/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573798) Time Spent: 20m (was: 10m) > Upgrade Jackson databind in branch-2.10 to 2.9.10.7 > --- > > Key: HADOOP-17601 > URL:
[GitHub] [hadoop] hadoop-yetus commented on pull request #2835: HADOOP-17601. Upgrade Jackson databind in branch-2.10 to 2.9.10.7
hadoop-yetus commented on pull request #2835: URL: https://github.com/apache/hadoop/pull/2835#issuecomment-809784691 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 10m 57s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 15m 24s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 21s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 18s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: | mvnsite | 0m 22s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 23s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 18s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 13s | the patch passed | | +1 :green_heart: | compile | 0m 14s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 14s | the patch passed | | +1 :green_heart: | compile | 0m 12s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: | javac | 0m 12s | the patch passed | | +1 :green_heart: | mvnsite | 0m 14s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 0m 14s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 13s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 12s | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. | | | | 33m 15s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2835/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2835 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 690657b0e940 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / 616256b | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2835/1/testReport/ | | Max. process+thread count | 93 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2835/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108
[ https://issues.apache.org/jira/browse/HADOOP-17603?focusedWorklogId=573792&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573792 ] ASF GitHub Bot logged work on HADOOP-17603: --- Author: ASF GitHub Bot Created on: 29/Mar/21 23:00 Start Date: 29/Mar/21 23:00 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2834: URL: https://github.com/apache/hadoop/pull/2834#issuecomment-809780644 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 17m 1s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 15m 37s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 15s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 14s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: | mvnsite | 0m 17s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 19s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 14s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 12s | the patch passed | | +1 :green_heart: | compile | 0m 12s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 12s | the patch passed | | +1 :green_heart: | compile | 0m 10s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: |
javac | 0m 10s | the patch passed | | +1 :green_heart: | mvnsite | 0m 12s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 13s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 10s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 10s | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. | | | | 38m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2834/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2834 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 8801b4d1fcdf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / 616256b | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2834/1/testReport/ | | Max. process+thread count | 83 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2834/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573792) Time Spent: 20m (was: 10m) > Upgrade tomcat-embed-core to 7.0.108 > > > Key: HADOOP-17603 > URL:
[GitHub] [hadoop] hadoop-yetus commented on pull request #2834: HADOOP-17603. Upgrade tomcat-embed-core to 7.0.108
hadoop-yetus commented on pull request #2834: URL: https://github.com/apache/hadoop/pull/2834#issuecomment-809780644 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 17m 1s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.10 Compile Tests _ | | +1 :green_heart: | mvninstall | 15m 37s | branch-2.10 passed | | +1 :green_heart: | compile | 0m 15s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | compile | 0m 14s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: | mvnsite | 0m 17s | branch-2.10 passed | | +1 :green_heart: | javadoc | 0m 19s | branch-2.10 passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 14s | branch-2.10 passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 12s | the patch passed | | +1 :green_heart: | compile | 0m 12s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javac | 0m 12s | the patch passed | | +1 :green_heart: | compile | 0m 10s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | +1 :green_heart: | javac | 0m 10s | the patch passed | | +1 :green_heart: | mvnsite | 0m 12s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 0m 13s | the patch passed with JDK Azul Systems, Inc.-1.7.0_262-b10 | | +1 :green_heart: | javadoc | 0m 10s | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | ||| _ Other Tests _ | | +1 :green_heart: | unit | 0m 10s | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. | | | | 38m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2834/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2834 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux 8801b4d1fcdf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-2.10 / 616256b | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~16.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2834/1/testReport/ | | Max. process+thread count | 83 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2834/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7
[ https://issues.apache.org/jira/browse/HADOOP-17601?focusedWorklogId=573775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573775 ] ASF GitHub Bot logged work on HADOOP-17601: --- Author: ASF GitHub Bot Created on: 29/Mar/21 22:35 Start Date: 29/Mar/21 22:35 Worklog Time Spent: 10m Work Description: amahussein opened a new pull request #2835: URL: https://github.com/apache/hadoop/pull/2835 Upgrade Jackson databind in branch-2.10 from 2.9.10.6 to 2.9.10.7: https://issues.apache.org/jira/browse/HADOOP-17601 Two known vulnerabilities found in Jackson-databind: [CVE-2021-20190](https://nvd.nist.gov/vuln/detail/CVE-2021-20190) high severity [CVE-2020-25649](https://nvd.nist.gov/vuln/detail/CVE-2020-25649) high severity -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573775) Remaining Estimate: 0h Time Spent: 10m > Upgrade Jackson databind in branch-2.10 to 2.9.10.7 > --- > > Key: HADOOP-17601 > URL: https://issues.apache.org/jira/browse/HADOOP-17601 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17601.branch-2.10.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Two known vulnerabilities found in Jackson-databind > [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity > [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7
[ https://issues.apache.org/jira/browse/HADOOP-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17601: Labels: pull-request-available (was: ) > Upgrade Jackson databind in branch-2.10 to 2.9.10.7 > --- > > Key: HADOOP-17601 > URL: https://issues.apache.org/jira/browse/HADOOP-17601 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-17601.branch-2.10.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Two known vulnerabilities found in Jackson-databind > [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity > [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein opened a new pull request #2835: HADOOP-17601. Upgrade Jackson databind in branch-2.10 to 2.9.10.7
amahussein opened a new pull request #2835: URL: https://github.com/apache/hadoop/pull/2835 Upgrade Jackson databind in branch-2.10 from 2.9.10.6 to 2.9.10.7: https://issues.apache.org/jira/browse/HADOOP-17601 Two known vulnerabilities found in Jackson-databind: [CVE-2021-20190](https://nvd.nist.gov/vuln/detail/CVE-2021-20190) high severity [CVE-2020-25649](https://nvd.nist.gov/vuln/detail/CVE-2020-25649) high severity -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
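A dependency upgrade like this one usually amounts to bumping a single managed version in hadoop-project/pom.xml. A hedged sketch only: the group/artifact coordinates are the real Jackson ones, but the exact placement and any version property names in the branch-2.10 POM are assumptions.

```xml
<!-- Illustrative only: pin the patched jackson-databind release
     (2.9.10.7) while the rest of Jackson 2 stays on its base version.
     Placement within hadoop-project/pom.xml is an assumption. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10.7</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Because only hadoop-project changes, Yetus runs the `hadoop-project` unit and xml checks, as seen in the report above.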
[jira] [Work logged] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108
[ https://issues.apache.org/jira/browse/HADOOP-17603?focusedWorklogId=573759&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573759 ] ASF GitHub Bot logged work on HADOOP-17603: --- Author: ASF GitHub Bot Created on: 29/Mar/21 22:20 Start Date: 29/Mar/21 22:20 Worklog Time Spent: 10m Work Description: amahussein opened a new pull request #2834: URL: https://github.com/apache/hadoop/pull/2834 https://issues.apache.org/jira/browse/HADOOP-17603 Upgrade tomcat-embed-core to 7.0.108 on branch-2.10 [CVE-2021-25329](https://nvd.nist.gov/vuln/detail/CVE-2021-25329) critical severity. Impact: [CVE-2020-9494](https://nvd.nist.gov/vuln/detail/CVE-2020-9494) 7.0.0-7.0.107 are all affected by the vulnerability. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573759) Remaining Estimate: 0h Time Spent: 10m > Upgrade tomcat-embed-core to 7.0.108 > > > Key: HADOOP-17603 > URL: https://issues.apache.org/jira/browse/HADOOP-17603 > Project: Hadoop Common > Issue Type: Bug > Components: build, security >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Attachments: HADOOP-17603.branch-2.10.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical > severity. > Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494] > 7.0.0-7.0.107 are all affected by the vulnerability. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108
[ https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17603: Labels: pull-request-available (was: ) > Upgrade tomcat-embed-core to 7.0.108 > > > Key: HADOOP-17603 > URL: https://issues.apache.org/jira/browse/HADOOP-17603 > Project: Hadoop Common > Issue Type: Bug > Components: build, security >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-17603.branch-2.10.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical > severity. > Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494] > 7.0.0-7.0.107 are all affected by the vulnerability. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein opened a new pull request #2834: HADOOP-17603. Upgrade tomcat-embed-core to 7.0.108
amahussein opened a new pull request #2834: URL: https://github.com/apache/hadoop/pull/2834 https://issues.apache.org/jira/browse/HADOOP-17603 Upgrade tomcat-embed-core to 7.0.108 on branch-2.10 [CVE-2021-25329](https://nvd.nist.gov/vuln/detail/CVE-2021-25329) critical severity. Impact: [CVE-2020-9494](https://nvd.nist.gov/vuln/detail/CVE-2020-9494) 7.0.0-7.0.107 are all affected by the vulnerability. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2784: HDFS-15850. Superuser actions should be reported to external enforcers
xiaoyuyao commented on a change in pull request #2784: URL: https://github.com/apache/hadoop/pull/2784#discussion_r603641136 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -2316,7 +2317,7 @@ boolean truncate(String src, long newLength, String clientName, removeBlocks(toRemoveBlocks); toRemoveBlocks.clear(); } - logAuditEvent(true, operationName, src, null, r.getFileStatus()); + logAuditEvent(true, operationName, src, null, status); Review comment: I think you can just add an assert(r != null); before the return at line 2325. That should be good enough. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
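The reviewer's fail-fast suggestion can be sketched as follows. This is a toy illustration, not the actual FSNamesystem code: `TruncateAuditSketch`, its fields, and the simplified `logAuditEvent` are hypothetical stand-ins, and `Objects.requireNonNull` is used in place of a plain `assert` so the guard also fires when the JVM runs without `-ea`.

```java
import java.util.Objects;

// Hypothetical, simplified stand-in for the pattern discussed above:
// verify the operation result `r` is non-null before the audit event is
// logged, instead of risking a later NullPointerException.
public class TruncateAuditSketch {
    static String lastAudit;  // captures the last audit line for the demo

    static void logAuditEvent(boolean succeeded, String op, String src) {
        lastAudit = (succeeded ? "allowed=" : "denied=") + op + " src=" + src;
    }

    static boolean truncate(String src, long newLength) {
        Boolean r = Boolean.TRUE;  // stands in for the real truncate result
        // Reviewer's suggestion: fail fast if r was never assigned.
        Objects.requireNonNull(r, "truncate result must be set before auditing");
        logAuditEvent(true, "truncate", src);
        return r;
    }

    public static void main(String[] args) {
        System.out.println(truncate("/tmp/file", 0L));
        System.out.println(lastAudit);
    }
}
```

The design point is simply that the guard sits before both the audit call and the return, so a null result fails loudly at the call site rather than deep inside the audit logger.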
[jira] [Work logged] (HADOOP-17608) TestKMS is flaky
[ https://issues.apache.org/jira/browse/HADOOP-17608?focusedWorklogId=573741&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573741 ] ASF GitHub Bot logged work on HADOOP-17608: --- Author: ASF GitHub Bot Created on: 29/Mar/21 21:44 Start Date: 29/Mar/21 21:44 Worklog Time Spent: 10m Work Description: xiaoyuyao commented on a change in pull request #2828: URL: https://github.com/apache/hadoop/pull/2828#discussion_r603483652 ## File path: hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java ## @@ -550,7 +550,7 @@ public Void call() throws Exception { threadGroup.enumerate(threads); Review comment: Thanks @aajisaka for reporting the issue and the fix. I think the problem is at line 550: per the JDK documentation, ThreadGroup#enumerate does not provide a reliable way to get all the threads. In particular, if the threads array is not big enough to hold all the threads, the extras are silently skipped. If the reloader thread is the one skipped, reloaderThread stays null and we get the NPE. The proposed fix may not completely solve the problem. I would suggest replacing lines 547-563 with Apache Commons ThreadUtils.findThreadsByName, which simplifies the code and handles this case without the NPE.

```suggestion
      Collection<Thread> result = ThreadUtils.findThreadsByName(SSL_RELOADER_THREAD_NAME);
      Assert.assertEquals(1, result.size());
      Assert.assertTrue("Reloader is not alive", result.iterator().next().isAlive());
```

Issue Time Tracking --- Worklog Id: (was: 573741) Time Spent: 0.5h (was: 20m) > TestKMS is flaky > > > Key: HADOOP-17608 > URL: https://issues.apache.org/jira/browse/HADOOP-17608 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: flaky-test, pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/460/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt] > The following https tests are flaky: > * testStartStopHttpsPseudo > * testStartStopHttpsKerberos > * testDelegationTokensOpsHttpsPseudo > {noformat} > [ERROR] > testStartStopHttpsPseudo(org.apache.hadoop.crypto.key.kms.server.TestKMS) > Time elapsed: 1.354 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:553) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS$1.call(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:258) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:235) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:230) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStop(TestKMS.java:534) > at > org.apache.hadoop.crypto.key.kms.server.TestKMS.testStartStopHttpsPseudo(TestKMS.java:634){noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #2828: HADOOP-17608. Fix NPE in TestKMS
xiaoyuyao commented on a change in pull request #2828: URL: https://github.com/apache/hadoop/pull/2828#discussion_r603483652 ## File path: hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java ## @@ -550,7 +550,7 @@ public Void call() throws Exception { threadGroup.enumerate(threads); Review comment: Thanks @aajisaka for reporting the issue and the fix. I think the problem is at line 550: per the JDK documentation, ThreadGroup#enumerate does not provide a reliable way to get all the threads. In particular, if the threads array is not big enough to hold all the threads, the extras are silently skipped. If the reloader thread is the one skipped, reloaderThread stays null and we get the NPE. The proposed fix may not completely solve the problem. I would suggest replacing lines 547-563 with Apache Commons ThreadUtils.findThreadsByName, which simplifies the code and handles this case without the NPE.

```suggestion
      Collection<Thread> result = ThreadUtils.findThreadsByName(SSL_RELOADER_THREAD_NAME);
      Assert.assertEquals(1, result.size());
      Assert.assertTrue("Reloader is not alive", result.iterator().next().isAlive());
```
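The silent truncation the reviewer describes is easy to reproduce with plain JDK classes. The sketch below (an illustration only, not TestKMS code) starts a few sleeper threads in a group and calls ThreadGroup#enumerate with an undersized array: enumerate copies only as many threads as fit and drops the rest without any error or exception.

```java
// Demonstrates why ThreadGroup#enumerate is unreliable for collecting all
// threads: per its Javadoc, threads that do not fit into the supplied
// array are silently ignored. Plain-JDK illustration, not TestKMS code.
public class EnumerateDemo {

    /** Starts groupSize sleeper threads in a fresh group, then enumerates
     *  them into an array of arraySize slots and returns the copied count. */
    static int copyInto(int groupSize, int arraySize) throws InterruptedException {
        ThreadGroup group = new ThreadGroup("demo");
        Runnable sleeper = () -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) { }
        };
        for (int i = 0; i < groupSize; i++) {
            new Thread(group, sleeper, "worker-" + i).start();
        }
        Thread[] slots = new Thread[arraySize];
        int copied = group.enumerate(slots); // returns how many were copied
        group.interrupt();                   // wake the sleepers so they exit
        return copied;
    }

    public static void main(String[] args) throws InterruptedException {
        // With 4 live threads but only 2 slots, enumerate reports 2 and the
        // other 2 are skipped -- no exception, no warning.
        System.out.println("copied " + copyInto(4, 2) + " of 4 threads");
    }
}
```

This is why looking a thread up by name (as the suggested ThreadUtils.findThreadsByName does) is the safer pattern when a specific thread must be found.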
[GitHub] [hadoop] saintstack commented on pull request #2693: Hadoop 16524 - resubmission following some unit test fixes
saintstack commented on pull request #2693: URL: https://github.com/apache/hadoop/pull/2693#issuecomment-809722506 Build 5 had compilation error... from elsewhere? Checking local, all compiles now. Retrying...
[jira] [Updated] (HADOOP-17609) Make SM4 support optional for OpenSSL native code
[ https://issues.apache.org/jira/browse/HADOOP-17609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17609: Affects Version/s: 3.4.0 > Make SM4 support optional for OpenSSL native code > - > > Key: HADOOP-17609 > URL: https://issues.apache.org/jira/browse/HADOOP-17609 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 3.4.0 >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > > openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 > because SM4 is not enabled in the openssl package. We should not force > users to install OpenSSL from source even if they do not use the SM4 feature.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2825: HDFS-15928. Replace RAND_pseudo_bytes in rpc_engine.cc
hadoop-yetus commented on pull request #2825: URL: https://github.com/apache/hadoop/pull/2825#issuecomment-809714695 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 51s | | trunk passed | | +1 :green_heart: | compile | 2m 48s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 50s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 23s | | trunk passed | | +1 :green_heart: | shadedclient | 57m 53s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 2m 44s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | cc | 2m 44s | | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 0 new + 46 unchanged - 6 fixed = 46 total (was 52) | | +1 :green_heart: | golang | 2m 44s | | the patch passed | | +1 :green_heart: | javac | 2m 44s | | the patch passed | | +1 :green_heart: | compile | 2m 44s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | cc | 2m 44s | | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new + 46 unchanged - 6 fixed = 46 total (was 52) | | +1 :green_heart: | golang | 2m 44s | | the patch passed | | +1 :green_heart: | javac | 2m 44s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 15s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 58s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 114m 39s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 198m 7s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2825/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2825 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux 1bf992d32fe4 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3b68227666e24a579942e89d52a9e8f16b2c4240 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2825/2/testReport/ | | Max. process+thread count | 596 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2825/2/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=573722&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573722 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 29/Mar/21 21:00 Start Date: 29/Mar/21 21:00 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-809710473 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 20s | | https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/2807 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/5/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. Issue Time Tracking --- Worklog Id: (was: 573722) Time Spent: 12.5h (was: 12h 20m) > Add an Audit plugin point for S3A auditing/context > -- > > Key: HADOOP-17511 > URL: https://issues.apache.org/jira/browse/HADOOP-17511 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 12.5h > Remaining Estimate: 0h > > Add a way for auditing tools to correlate S3 object calls with Hadoop FS API > calls. > Initially just to log/forward to an auditing service.
> Later: let us attach them as parameters in S3 requests, such as opentracing > headers or (my initial idea: the http referrer header - where it will get into > the log) > Challenges > * ensuring the audit span is created for every public entry point. That will > have to include those used in s3guard tools, some de facto public APIs > * and not re-entered for active spans. S3A code must not call back into the > FS API points > * Propagation across worker threads
[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector
hadoop-yetus commented on pull request #2807: URL: https://github.com/apache/hadoop/pull/2807#issuecomment-809710473 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 20s | | https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/2807 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/5/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2792: HDFS-15909. Make fnmatch cross platform
hadoop-yetus commented on pull request #2792: URL: https://github.com/apache/hadoop/pull/2792#issuecomment-809707799 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 56s | | trunk passed | | +1 :green_heart: | compile | 2m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 44s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 57m 44s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 2m 42s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | cc | 2m 42s | | the patch passed | | +1 :green_heart: | golang | 2m 42s | | the patch passed | | +1 :green_heart: | javac | 2m 42s | | the patch passed | | +1 :green_heart: | compile | 2m 42s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | cc | 2m 42s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/6/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 8 new + 44 unchanged - 8 fixed = 52 total (was 52) | | +1 :green_heart: | golang | 2m 42s | | the patch passed | | +1 :green_heart: | javac | 2m 42s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 17s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 105m 44s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 188m 40s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2792 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux ec6fd672cccb 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 39ce02d2bf7f9e64dfbb2b23c023cb45deed55f5 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/6/testReport/ | | Max. process+thread count | 597 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2792/6/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
hadoop-yetus commented on pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#issuecomment-809689259 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 21m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 1s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 42s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | javac | 1m 17s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/3/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 4 new + 551 unchanged - 0 fixed = 555 total (was 551) | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | javac | 1m 10s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/3/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 4 new + 535 unchanged - 0 fixed = 539 total (was 535) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 56s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 465 unchanged - 0 fixed = 466 total (was 465) | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 44s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 319m 59s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. | | | | 433m 34s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestLeaseRecovery2 | | |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2833: HDFS-15934: Make DirectoryScanner reconcile blocks batch size and int…
hadoop-yetus commented on pull request #2833: URL: https://github.com/apache/hadoop/pull/2833#issuecomment-809675715 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 36s | | trunk passed | | +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 11s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 20s | | trunk passed | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 46s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 9s | | the patch passed | | +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 55s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2833/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 484 unchanged - 0 fixed = 488 total (was 484) | | +1 :green_heart: | mvnsite | 1m 11s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 43s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 45s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 234m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2833/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. 
| | | | 320m 57s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2833/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2833 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux 62820b092e29 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4bc31c8d1731e7fc88b514488136048d30d365ec | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK
[GitHub] [hadoop] hadoop-yetus commented on pull request #2826: HDFS-15929. Replace RAND_pseudo_bytes in util.cc
hadoop-yetus commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-809660209 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 19s | | trunk passed | | +1 :green_heart: | compile | 2m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 52s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 52m 54s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 16s | | the patch passed | | +1 :green_heart: | compile | 2m 47s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 2m 47s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/4/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 6 new + 40 unchanged - 12 fixed = 46 total (was 52) | | +1 :green_heart: | golang | 2m 47s | | the patch passed | | +1 :green_heart: | javac | 2m 47s | | the patch passed | | +1 :green_heart: | compile | 2m 44s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | cc | 2m 44s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/4/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 40 unchanged - 12 fixed = 46 total (was 52) | | +1 :green_heart: | golang | 2m 44s | | the patch passed | | +1 :green_heart: | javac | 2m 44s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 14s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 32m 56s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. | | | | 109m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2826 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux 76eaf2a8d44d 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / eb4abff06be2cf3bd5094f770c8c7bd2cfb9e3e1 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/4/testReport/ | | Max. process+thread count | 650 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2824: HDFS-15927. Catch polymorphic type by reference
hadoop-yetus commented on pull request #2824: URL: https://github.com/apache/hadoop/pull/2824#issuecomment-809656954 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 31s | | trunk passed | | +1 :green_heart: | compile | 2m 40s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 43s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 51m 40s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 18s | | the patch passed | | +1 :green_heart: | compile | 2m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 2m 31s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2824/2/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 2 new + 48 unchanged - 4 fixed = 50 total (was 52) | | +1 :green_heart: | golang | 2m 31s | | the patch passed | | +1 :green_heart: | javac | 2m 31s | | the patch passed | | +1 :green_heart: | compile | 2m 31s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | cc | 2m 31s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2824/2/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 44 unchanged - 8 fixed = 50 total (was 52) | | +1 :green_heart: | golang | 2m 31s | | the patch passed | | +1 :green_heart: | javac | 2m 31s | | the patch passed | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 19s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 35m 11s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. | | | | 109m 25s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2824/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2824 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux ccde94a4c725 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0879d0b79dfe918d3076c8c485279d9c79009bc1 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2824/2/testReport/ | | Max. process+thread count | 611 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output |
[jira] [Commented] (HADOOP-17610) DelegationTokenAuthenticator prints token information
[ https://issues.apache.org/jira/browse/HADOOP-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17310883#comment-17310883 ] Hadoop QA commented on HADOOP-17610: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 26m 7s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 6s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 47s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 16s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 1s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 22m 54s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 2m 24s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 15m 1s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/175/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt{color} | {color:red} root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 1s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/175/artifact/out/patch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt{color} | {color:red} root in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 37s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 37s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 30s{color} |
[GitHub] [hadoop] ayushtkn commented on pull request #2823: HDFS-15926 : Remove duplicate dependency of hadoop-annotations
ayushtkn commented on pull request #2823: URL: https://github.com/apache/hadoop/pull/2823#issuecomment-809596501 Thanx @virajjasani for the contribution. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn merged pull request #2823: HDFS-15926 : Remove duplicate dependency of hadoop-annotations
ayushtkn merged pull request #2823: URL: https://github.com/apache/hadoop/pull/2823
[GitHub] [hadoop] GauthamBanasandra commented on pull request #2826: HDFS-15929. Replace RAND_pseudo_bytes in util.cc
GauthamBanasandra commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-809575058 ``` [WARNING] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2826/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:126:40: warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated [-Wdeprecated-declarations] [WARNING] from /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2826/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:18: [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here ``` @goiri the above warnings are fixed in this PR - https://github.com/apache/hadoop/pull/2825. I created a separate PR for replacing each instance of `RAND_pseudo_bytes` since each replacement involves quite a bit of refactoring, and I didn't want to complicate the PR review process.
[GitHub] [hadoop] goiri merged pull request #2827: HDFS-15935. Use memcpy for copying non-null terminated string.
goiri merged pull request #2827: URL: https://github.com/apache/hadoop/pull/2827
[GitHub] [hadoop] goiri commented on pull request #2826: HDFS-15929. Replace RAND_pseudo_bytes in util.cc
goiri commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-809561959 How come we still see: ``` [WARNING] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2826/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:126:40: warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated [-Wdeprecated-declarations] [WARNING] from /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2826/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:18: [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here [WARNING] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2826/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:126:40: warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated [-Wdeprecated-declarations] [WARNING] from /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2826/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/rpc_engine.cc:18: [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here ```
[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=573598=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573598 ] ASF GitHub Bot logged work on HADOOP-11245: --- Author: ASF GitHub Bot Created on: 29/Mar/21 16:35 Start Date: 29/Mar/21 16:35 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#issuecomment-809528708 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 13m 16s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 8s | | trunk passed | | +1 :green_heart: | compile | 20m 46s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 6s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 2s | | trunk passed | | +1 :green_heart: | javadoc | 2m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 10s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 56s | | the patch passed | | +1 :green_heart: | compile | 20m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 2s | | the patch passed | | +1 :green_heart: | compile | 17m 58s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 17m 58s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 45s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2832/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 23 new + 173 unchanged - 10 fixed = 196 total (was 183) | | +1 :green_heart: | mvnsite | 3m 0s | | the patch passed | | +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 28s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 58s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 0s | | hadoop-nfs in the patch passed. | | -1 :x: | unit | 230m 28s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2832/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 3m 10s | | hadoop-hdfs-nfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 7s | | The patch does not generate ASF License warnings. 
| | | | 431m 36s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSClientExcludedNodes | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2832/1/artifact/out/Dockerfile | | GITHUB PR |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2832: HADOOP-11245. Update NFS gateway to use Netty4
hadoop-yetus commented on pull request #2832: URL: https://github.com/apache/hadoop/pull/2832#issuecomment-809528708 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 13m 16s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 8s | | trunk passed | | +1 :green_heart: | compile | 20m 46s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 18m 6s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 2s | | trunk passed | | +1 :green_heart: | javadoc | 2m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 54s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 10s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 10s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 56s | | the patch passed | | +1 :green_heart: | compile | 20m 2s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 2s | | the patch passed | | +1 :green_heart: | compile | 17m 58s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 17m 58s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 45s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2832/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 23 new + 173 unchanged - 10 fixed = 196 total (was 183) | | +1 :green_heart: | mvnsite | 3m 0s | | the patch passed | | +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 28s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 58s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 0s | | hadoop-nfs in the patch passed. | | -1 :x: | unit | 230m 28s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2832/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 3m 10s | | hadoop-hdfs-nfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 7s | | The patch does not generate ASF License warnings. 
| | | | 431m 36s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSClientExcludedNodes | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2832/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2832 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle | | uname | Linux 795d2c780461 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk /
[GitHub] [hadoop] hadoop-yetus commented on pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
hadoop-yetus commented on pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#issuecomment-809497636 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 1s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 30s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 27s | | the patch passed | | +1 :green_heart: | compile | 1m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 31s | | the patch passed | | +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 23s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 1m 4s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 465 unchanged - 0 fixed = 466 total (was 465) | | +1 :green_heart: | mvnsite | 1m 30s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 46s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 51s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 350m 56s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. 
| | | | 450m 27s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.server.namenode.TestFileTruncate | | | hadoop.hdfs.server.datanode.TestIncrementalBrVariations | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.TestDistributedFileSystem | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2831 | | Optional
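For readers of the HDFS-15920 discussion above: the change replaces the hard-coded 1000 ms `RECHECK_INTERVAL` constant in the NameNode's safe-mode monitor with a configurable value. Assuming the patch exposes it as a millisecond-valued `hdfs-site.xml` property — the property name below reflects the PR's intent and may differ from the final patch — usage would look like:

```xml
<!-- hdfs-site.xml: hypothetical property name; the previous
     hard-coded SafeModeMonitor recheck interval was 1000 ms -->
<property>
  <name>dfs.namenode.safemode.recheck.interval</name>
  <value>2000</value>
</property>
```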
[jira] [Updated] (HADOOP-17611) Distcp parallel file copy breaks the modification time
[ https://issues.apache.org/jira/browse/HADOOP-17611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Maroti updated HADOOP-17611: - Description: The commit HADOOP-11794. Enable distcp to copy blocks in parallel. (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of large files. In CopyCommitter.java inside concatFileChunks Filesystem.concat is called which changes the modification time, therefore the modification times of files copied by distcp will not match the source files. However this only occurs for large enough files, which are copied by splitting them up by distcp. In concatFileChunks before calling concat extract the modification time and apply that to the concatenated result-file after the concat. (probably best -after- before the rename()). was: The commit HADOOP-11794. Enable distcp to copy blocks in parallel. (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of large files. In CopyCommitter.java inside concatFileChunks Filesystem.concat is called which changes the modification time, therefore the modification times of files copied by distcp will not match the source files. However this only occurs for large enough files, which are copied by splitting them up by distcp. In concatFileChunks before calling concat extract the modification time and apply that to the concatenated result-file after the concat. (probably best after the rename()). > Distcp parallel file copy breaks the modification time > -- > > Key: HADOOP-17611 > URL: https://issues.apache.org/jira/browse/HADOOP-17611 > Project: Hadoop Common > Issue Type: Bug >Reporter: Adam Maroti >Priority: Major > > The commit HADOOP-11794. Enable distcp to copy blocks in parallel. > (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of > large files. 
> > In CopyCommitter.java inside concatFileChunks Filesystem.concat is called > which changes the modification time, therefore the modification times of files > copied by distcp will not match the source files. However this only occurs > for large enough files, which are copied by splitting them up by distcp. > In concatFileChunks before calling concat extract the modification time and > apply that to the concatenated result-file after the concat. (probably best > -after- before the rename()). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
hadoop-yetus commented on pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#issuecomment-809463892 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 10s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 31s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 57s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 465 unchanged - 0 fixed = 466 total (was 465) | | +1 :green_heart: | mvnsite | 1m 19s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 360m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 455m 11s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.namenode.TestNameNodeMXBean | | | hadoop.hdfs.server.datanode.TestIncrementalBrVariations | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.TestLeaseRecovery2 | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.datanode.TestBlockScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2831 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux
[jira] [Created] (HADOOP-17611) Distcp parallel file copy breaks the modification time
Adam Maroti created HADOOP-17611: Summary: Distcp parallel file copy breaks the modification time Key: HADOOP-17611 URL: https://issues.apache.org/jira/browse/HADOOP-17611 Project: Hadoop Common Issue Type: Bug Reporter: Adam Maroti The commit HADOOP-11794. Enable distcp to copy blocks in parallel. (bf3fb585aaf2b179836e139c041fc87920a3c886) broke the modification time of large files. In CopyCommitter.java inside concatFileChunks Filesystem.concat is called, which changes the modification time; therefore the modification times of files copied by distcp will not match the source files. However this only occurs for large enough files, which are copied by splitting them up by distcp. In concatFileChunks, before calling concat, extract the modification time and apply that to the concatenated resulting file after the concat. (probably best after the rename()).
[jira] [Updated] (HADOOP-17610) DelegationTokenAuthenticator prints token information
[ https://issues.apache.org/jira/browse/HADOOP-17610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravuri Sushma sree updated HADOOP-17610: Attachment: HADOOP-17610.patch Status: Patch Available (was: Open) > DelegationTokenAuthenticator prints token information > - > > Key: HADOOP-17610 > URL: https://issues.apache.org/jira/browse/HADOOP-17610 > Project: Hadoop Common > Issue Type: Bug > Reporter: Ravuri Sushma sree > Assignee: Ravuri Sushma sree > Priority: Major > Attachments: HADOOP-17610.patch > > > Resource Manager logs print token information; as this is sensitive > information it must be exempted from being printed.
[GitHub] [hadoop] GauthamBanasandra commented on pull request #2826: HDFS-15929. Replace RAND_pseudo_bytes in util.cc
GauthamBanasandra commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-809441562 This PR fixes some warnings reported as part of https://github.com/apache/hadoop/pull/2792. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] GauthamBanasandra commented on a change in pull request #2792: HDFS-15909. Make fnmatch cross platform
GauthamBanasandra commented on a change in pull request #2792: URL: https://github.com/apache/hadoop/pull/2792#discussion_r603357750 ## File path: hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/fs/filesystem.cc ## @@ -722,16 +722,16 @@ void FileSystemImpl::FindShim(const Status , const std::vector & for (StatInfo const& si : stat_infos) { //If we are at the last depth and it matches both path and name, we need to output it. if (operational_state->depth == shared_state->dirs.size() - 2 -&& !fnmatch(shared_state->dirs[operational_state->depth + 1].c_str(), si.path.c_str(), 0) -&& !fnmatch(shared_state->name.c_str(), si.path.c_str(), 0)) { +&& XPlatform::Syscall::FnMatch(shared_state->dirs[operational_state->depth + 1], si.path) +&& XPlatform::Syscall::FnMatch(shared_state->name, si.path)) { outputs.push_back(si); } //Skip if not directory if(si.file_type != StatInfo::IS_DIR) { continue; } //Checking for a match with the path at the current depth -if(!fnmatch(shared_state->dirs[operational_state->depth + 1].c_str(), si.path.c_str(), 0)){ + if(XPlatform::Syscall::FnMatch(shared_state->dirs[operational_state->depth + 1], si.path)) { Review comment: > XPlatform::Syscall::FnMatch is equivalent to !fnmatch? Yes. fnmatch returns 0 on a successful match.
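The exchange above hinges on fnmatch's return convention: POSIX fnmatch() returns 0 on a successful match, so a boolean wrapper such as XPlatform::Syscall::FnMatch negates that result. A rough sketch of the same boolean interface in Python (the wrapper name is illustrative, not part of any real API):

```python
import fnmatch

def fn_match(pattern: str, path: str) -> bool:
    """Boolean glob match. Analogous to wrapping POSIX fnmatch(),
    which returns 0 on success, behind a true/false interface so
    call sites drop the easy-to-miss `!fnmatch(...)` negation."""
    # fnmatchcase does a case-sensitive match, like POSIX fnmatch
    # with no flags; it already returns a bool.
    return fnmatch.fnmatchcase(path, pattern)
```

Call sites then read naturally: `if fn_match("*.cc", path): ...` instead of the inverted `if (!fnmatch(pat, path, 0))`.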
[GitHub] [hadoop] qizhu-lucas commented on pull request #2833: HDFS-15934: Make DirectoryScanner reconcile blocks batch size and int…
qizhu-lucas commented on pull request #2833: URL: https://github.com/apache/hadoop/pull/2833#issuecomment-809439075 @Hexiaoqiao @ayushtkn Could you help review this when you are free? Thanks.
[GitHub] [hadoop] GauthamBanasandra commented on pull request #2827: HDFS-15922. Use memcpy for copying non-null terminated string.
GauthamBanasandra commented on pull request #2827: URL: https://github.com/apache/hadoop/pull/2827#issuecomment-809438892 This PR fixes some warnings reported as part of https://github.com/apache/hadoop/pull/2792 in CI run https://github.com/apache/hadoop/pull/2792#issuecomment-803448506
[GitHub] [hadoop] qizhu-lucas opened a new pull request #2833: HDFS-15934: Make DirectoryScanner reconcile blocks batch size and int…
qizhu-lucas opened a new pull request #2833: URL: https://github.com/apache/hadoop/pull/2833 …erval between batch configurable. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] hadoop-yetus commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
hadoop-yetus commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-809437824 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 23s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 44s | | trunk passed | | +1 :green_heart: | compile | 5m 8s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 4m 48s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 18s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 59s | | trunk passed | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 39s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 45s | | the patch passed | | +1 :green_heart: | compile | 5m 5s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 5m 5s | | the patch passed | | +1 :green_heart: | compile | 4m 41s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 4m 41s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 10s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 4 new + 225 unchanged - 9 fixed = 229 total (was 234) | | +1 :green_heart: | mvnsite | 1m 45s | | the patch passed | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 2m 2s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 4m 38s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 4s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 347m 2s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | unit | 23m 34s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2830/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. 
| | | | 488m 52s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks | | | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestPersistBlocks | | | hadoop.hdfs.server.datanode.TestBlockRecovery | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots | | | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.federation.router.TestRouterFederationRename | | Subsystem | Report/Notes | |--:|:-| |
[jira] [Created] (HADOOP-17610) DelegationTokenAuthenticator prints token information
Ravuri Sushma sree created HADOOP-17610: --- Summary: DelegationTokenAuthenticator prints token information Key: HADOOP-17610 URL: https://issues.apache.org/jira/browse/HADOOP-17610 Project: Hadoop Common Issue Type: Bug Reporter: Ravuri Sushma sree Assignee: Ravuri Sushma sree Resource Manager logs print token information; as this is sensitive information it must be exempted from being printed.
[GitHub] [hadoop] qizhu-lucas commented on pull request #2829: HDFS-15930: Fix some @param errors in DirectoryScanner.
qizhu-lucas commented on pull request #2829: URL: https://github.com/apache/hadoop/pull/2829#issuecomment-809401247 Thanks a lot @Hexiaoqiao @ayushtkn for review.
[GitHub] [hadoop] jianghuazhu commented on pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
jianghuazhu commented on pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#issuecomment-809368241 @ayushtkn, I submitted some new code, can you review it for me? Thank you very much.
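HDFS-15920 proposes replacing the hardcoded SafeModeMonitor#RECHECK_INTERVAL with a configurable value. A minimal sketch of that pattern — read the interval from configuration, falling back to the previously hardcoded constant — is below. The configuration key name and default used here are illustrative assumptions, not the actual HDFS key:

```python
# Hypothetical names; the real key and default would live alongside
# the other HDFS configuration keys.
DEFAULT_RECHECK_INTERVAL_MS = 1000

def recheck_interval_ms(conf: dict) -> int:
    """Read the safe-mode recheck interval from configuration,
    keeping the old hardcoded value as the default and rejecting
    non-positive settings."""
    value = int(conf.get("dfs.namenode.safemode.recheck-interval-ms",
                         DEFAULT_RECHECK_INTERVAL_MS))
    if value <= 0:
        # guard against misconfiguration: fall back to the default
        return DEFAULT_RECHECK_INTERVAL_MS
    return value
```

The design point of such a change is that the constant stays as the documented default, so existing deployments behave identically unless the new key is set.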
[jira] [Work logged] (HADOOP-17548) ABFS: Toggle Store Mkdirs request overwrite parameter
[ https://issues.apache.org/jira/browse/HADOOP-17548?focusedWorklogId=573478=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573478 ] ASF GitHub Bot logged work on HADOOP-17548: --- Author: ASF GitHub Bot Created on: 29/Mar/21 13:05 Start Date: 29/Mar/21 13:05 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2781: URL: https://github.com/apache/hadoop/pull/2781#issuecomment-809360549 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 24m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ branch-3.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 36s | | branch-3.3 passed | | +1 :green_heart: | compile | 0m 35s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 0m 28s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 0m 40s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 0m 31s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 0m 59s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 15m 58s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 17s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed | | +1 :green_heart: | spotbugs | 1m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 52s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 57s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. | | | | 98m 54s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2781/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2781 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b651d22c9165 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 0258df97c9fd65a9e8de5817a39e22ec9aa17ee8 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2781/3/testReport/ | | Max. process+thread count | 612 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2781/3/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573478) Time Spent: 1.5h (was: 1h 20m) > ABFS: Toggle Store Mkdirs request overwrite parameter > - > > Key: HADOOP-17548 > URL: https://issues.apache.org/jira/browse/HADOOP-17548 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sumangala Patki >Assignee: Sumangala Patki >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > >
[GitHub] [hadoop] hadoop-yetus commented on pull request #2781: HADOOP-17548. ABFS: Toggle Store Mkdirs request overwrite parameter (#2729)
hadoop-yetus commented on pull request #2781: URL: https://github.com/apache/hadoop/pull/2781#issuecomment-809360549 :confetti_ball: **+1 overall** (full Yetus report identical to the one logged in the HADOOP-17548 worklog above).
[GitHub] [hadoop] virajjasani commented on pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
virajjasani commented on pull request #2830: URL: https://github.com/apache/hadoop/pull/2830#issuecomment-809344870 Could you please take a look @ayushtkn @liuml07 ?
[GitHub] [hadoop] virajjasani commented on pull request #2823: HDFS-15926 : Remove duplicate dependency of hadoop-annotations
virajjasani commented on pull request #2823: URL: https://github.com/apache/hadoop/pull/2823#issuecomment-809343337 Thanks @ayushtkn , the change directly applies to branch-3.3, 3.2, 3.1 and 2.10.
[GitHub] [hadoop] tomscut commented on pull request #2770: HDFS-15892. Add metric for editPendingQ in FSEditLogAsync
tomscut commented on pull request #2770: URL: https://github.com/apache/hadoop/pull/2770#issuecomment-809305572 Hi @tasanuma @dineshchitlangia , could you please help review the code?
[GitHub] [hadoop] steveloughran commented on a change in pull request #2775: MAPREDUCE-7329: HadoopPipes task has failed because of the ping timeout exception
steveloughran commented on a change in pull request #2775: URL: https://github.com/apache/hadoop/pull/2775#discussion_r595924047

## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Application.java

All three comments below are on the following hunk (quoted once here; the original message repeats it before each comment):

```diff
@@ -266,4 +271,39 @@ public static String createDigest(byte[] password, String data)
     return SecureShuffleUtils.hashFromString(data, key);
   }
+  private class PingSocketCleaner extends Thread {
+    PingSocketCleaner(String name) {
+      super(name);
+    }
+
+    @Override
+    public void run() {
+      LOG.info("PingSocketCleaner started...");
+      while (true) {
+        Socket clientSocket = null;
+        try {
+          clientSocket = serverSocket.accept();
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Got one client socket...");
+          }
+          int readData = clientSocket.getInputStream().read();
+          if (readData == -1) {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("close socket cause client has closed.");
+            }
+            clientSocket.close();
+          }
+        } catch (IOException exception) {
+          LOG.error("PingSocketCleaner exception", exception);
+          if (clientSocket != null) {
```

Review comment (on the `LOG.isDebugEnabled()` wrapper): SLF4J doesn't need the `isDebugEnabled()` wrappers unless the log is doing any complex work. But here it might be good to keep it and have the log also note the socket address, e.g.

```java
LOG.debug("Connection received from {}", clientSocket.getInetAddress());
```

Review comment (on `int readData = clientSocket.getInputStream().read();`): so we accept() a connection, then read() one byte. If there is no data or there's an IOE, the socket is closed. But what if there is a byte? What happens to the clientSocket?

Review comment (on the `catch` block, at `if (clientSocket != null) {`): use `org.apache.hadoop.io.IOUtils.cleanupWithLogger()`.
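Steve's second comment hinges on the socket end-of-stream contract: `read()` returns -1 only once the peer has closed its side of the connection. A minimal, self-contained sketch of that accept-then-probe pattern, using only `java.net` and no Hadoop classes (`PingCloseSketch` and `peerClosed` are illustrative names, not part of the patch):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PingCloseSketch {
    // read() on a socket whose peer has closed returns -1 (end of stream);
    // this is the condition the cleaner thread uses to decide to close.
    static boolean peerClosed(Socket socket) throws IOException {
        return socket.getInputStream().read() == -1;
    }

    public static void main(String[] args) throws Exception {
        // Port 0 asks the OS for an ephemeral port.
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket("localhost", server.getLocalPort());
            try (Socket accepted = server.accept()) {
                client.close(); // the peer closes first, as a finished pinger would
                System.out.println("peer closed: " + peerClosed(accepted));
            }
        }
    }
}
```

If `read()` instead returned a byte, the accepted socket would still be open and would have to be closed somewhere else; that unhandled path is exactly what the second review comment is asking about.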
[jira] [Work logged] (HADOOP-17576) ABFS: Disable throttling update for auth failures
[ https://issues.apache.org/jira/browse/HADOOP-17576?focusedWorklogId=573399=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573399 ] ASF GitHub Bot logged work on HADOOP-17576: --- Author: ASF GitHub Bot Created on: 29/Mar/21 10:25 Start Date: 29/Mar/21 10:25 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2761: URL: https://github.com/apache/hadoop/pull/2761#issuecomment-809266036 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 55s | | trunk passed | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 34s | | trunk passed | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 32s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 24s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 27s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 55s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 79m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2761/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2761 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5f1fb3e18d5b 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 2bef19045d3f290e8ff2063fc790d2deb500979c | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2761/11/testReport/ | | Max. process+thread count | 567 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2761/11/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message
[GitHub] [hadoop] hadoop-yetus commented on pull request #2761: HADOOP-17576. ABFS: Disable throttling update for auth failures
hadoop-yetus commented on pull request #2761: URL: https://github.com/apache/hadoop/pull/2761#issuecomment-809266036 :confetti_ball: **+1 overall** (full Yetus report identical to the one logged in the HADOOP-17576 worklog above).
[jira] [Assigned] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brahma Reddy Battula reassigned HADOOP-17593: - Assignee: lixianwei > hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive > dependency > > > Key: HADOOP-17593 > URL: https://issues.apache.org/jira/browse/HADOOP-17593 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.1, 3.4.0 >Reporter: Steve Loughran >Assignee: lixianwei >Priority: Major > > Dependencies of hadoop-cloud-storage show that hadoop-huaweicloud is pulling > in log4j. > it should not/must not, at least, not if the huaweicloud can live without it. > * A version of log4j 2.,2 on the CP is only going to complicate lives > * once we can move onto it ourselves we need to be in control of versions > [INFO] \- org.apache.hadoop:hadoop-huaweicloud:jar:3.4.0-SNAPSHOT:compile > [INFO] \- com.huaweicloud:esdk-obs-java:jar:3.20.4.2:compile > [INFO] +- com.jamesmurty.utils:java-xmlbuilder:jar:1.2:compile > [INFO] +- com.squareup.okhttp3:okhttp:jar:3.14.2:compile > [INFO] +- org.apache.logging.log4j:log4j-core:jar:2.12.0:compile > [INFO] \- org.apache.logging.log4j:log4j-api:jar:2.12.0:compile -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310551#comment-17310551 ] Brahma Reddy Battula commented on HADOOP-17593: --- Hi [~Rigenyi] , I added you as a contributor to the Hadoop Common project, so you can now upload the patch. Welcome aboard.
[jira] [Comment Edited] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310547#comment-17310547 ] lixianwei edited comment on HADOOP-17593 at 3/29/21, 10:13 AM: --- [~ste...@apache.org] I have a patch for this but do not have permission to upload it. was (Author: rigenyi): I have patch for this and do not have permission to upload. [~ste...@apache.org]
[jira] [Commented] (HADOOP-17593) hadoop-huaweicloud and hadoop-cloud-storage to remove log4j as transitive dependency
[ https://issues.apache.org/jira/browse/HADOOP-17593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310547#comment-17310547 ] lixianwei commented on HADOOP-17593: I have a patch for this but do not have permission to upload it. [~ste...@apache.org]
[jira] [Assigned] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HADOOP-15327: Assignee: Wei-Chiu Chuang > Upgrade MR ShuffleHandler to use Netty4 > --- > > Key: HADOOP-15327 > URL: https://issues.apache.org/jira/browse/HADOOP-15327 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Wei-Chiu Chuang >Priority: Major > > This way, we can remove the dependencies on the netty3 (jboss.netty)
[GitHub] [hadoop] jianghuazhu commented on a change in pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
jianghuazhu commented on a change in pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#discussion_r603147685

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java

```diff
@@ -640,7 +643,17 @@ private void reportStatus(String msg, boolean rightNow) {
    */
   private class SafeModeMonitor implements Runnable {
     /** Interval in msec for checking safe mode. */
-    private static final long RECHECK_INTERVAL = 1000;
+    private long recheckInterval = 1000;
+
+    public SafeModeMonitor() {
+      Configuration conf = new HdfsConfiguration();
+      long interval = conf.getLong(
+          DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+          DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+      if (interval > 0) {
+        recheckInterval = interval;
```

Review comment (on `recheckInterval = interval;`): @ayushtkn , thank you very much for your suggestions, I will submit some improved code later.
[GitHub] [hadoop] ayushtkn commented on pull request #2812: YARN-10712. Fix word errors in class comments
ayushtkn commented on pull request #2812: URL: https://github.com/apache/hadoop/pull/2812#issuecomment-809224733 Needs a rebase, the build failed.
[GitHub] [hadoop] ayushtkn commented on a change in pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
ayushtkn commented on a change in pull request #2831: URL: https://github.com/apache/hadoop/pull/2831#discussion_r603138171

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java

```diff
@@ -640,7 +643,17 @@ private void reportStatus(String msg, boolean rightNow) {
    */
   private class SafeModeMonitor implements Runnable {
     /** Interval in msec for checking safe mode. */
-    private static final long RECHECK_INTERVAL = 1000;
+    private long recheckInterval = 1000;
+
+    public SafeModeMonitor() {
+      Configuration conf = new HdfsConfiguration();
+      long interval = conf.getLong(
+          DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+          DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+      if (interval > 0) {
+        recheckInterval = interval;
```

Review comment (on `Configuration conf = new HdfsConfiguration();`): No need to create a new `conf` object; get the conf object passed on from the `BlockManagerSafeMode` constructor and initialise the daemon inside the constructor itself:

```
smmthread = new Daemon(new SafeModeMonitor(conf));
```

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java

```diff
@@ -366,6 +368,22 @@ public void testSafeModeMonitor() throws Exception {
     assertFalse(bmSafeMode.isInSafeMode());
   }

+  @Test
+  public void testSafemodeRecheckIntervalValue() {
+    Configuration conf = new HdfsConfiguration();
+    long interval = conf.getLong(
+        DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+        DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+
+    assertEquals(interval, DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+
+    conf.setLong(DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY, 2000);
+    interval = conf.getLong(
+        DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+        DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+    assertEquals(interval, 2000);
+  }
```

Review comment (on the new test): What is this test testing? The working of the Configuration class? That isn't required. What you need to test is that the NameNode actually uses the configured value as the recheck interval.

Review comment (on the `if (interval > 0)` check in the BlockManagerSafeMode hunk above): Add a warn log saying the configured value is invalid and that the default value is being used. Secondly, as of now you have two variables, `recheckInterval` and `interval`; keep only `recheckInterval` and initialise it in the constructor: if the value is greater than 0, use it; if not, set it to `DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT`.
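The gist of the two code comments is: take the interval from the caller, validate it once in the constructor, and fall back to the default when the configured value is not positive. A standalone sketch of that validation logic, with no Hadoop `Configuration` dependency (the class name and constant are illustrative, not the actual patch):

```java
public class RecheckIntervalSketch {
    // Mirrors the role of DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT (1000 ms).
    static final long DEFAULT_RECHECK_INTERVAL_MS = 1000L;

    private final long recheckIntervalMs;

    // The caller (BlockManagerSafeMode in the real patch) reads the configured
    // value and hands it in; no new HdfsConfiguration is created here.
    RecheckIntervalSketch(long configuredMs) {
        if (configuredMs > 0) {
            this.recheckIntervalMs = configuredMs;
        } else {
            // Real code would also emit a warn log about the invalid value.
            this.recheckIntervalMs = DEFAULT_RECHECK_INTERVAL_MS;
        }
    }

    long recheckIntervalMs() {
        return recheckIntervalMs;
    }

    public static void main(String[] args) {
        System.out.println(new RecheckIntervalSketch(2000).recheckIntervalMs()); // prints 2000
        System.out.println(new RecheckIntervalSketch(-1).recheckIntervalMs());   // prints 1000
    }
}
```

A test along the lines ayushtkn asks for would then start a MiniDFSCluster with the key set and assert the monitor's interval, rather than round-tripping a value through `Configuration`.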
[jira] [Work logged] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?focusedWorklogId=573373&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573373 ] ASF GitHub Bot logged work on HADOOP-11245: --- Author: ASF GitHub Bot Created on: 29/Mar/21 09:22 Start Date: 29/Mar/21 09:22 Worklog Time Spent: 10m Work Description: jojochuang opened a new pull request #2832: URL: https://github.com/apache/hadoop/pull/2832 ## NOTICE JIRA: https://issues.apache.org/jira/browse/HADOOP-11245 This is a draft. It passed unit tests but needs functional tests to verify there are no memory leaks and that performance is good. Looking for additional pairs of eyes to help with the code review. Issue Time Tracking --- Worklog Id: (was: 573373) Remaining Estimate: 0h Time Spent: 10m > Update NFS gateway to use Netty4 > > > Key: HADOOP-11245 > URL: https://issues.apache.org/jira/browse/HADOOP-11245 > Project: Hadoop Common > Issue Type: Sub-task > Components: nfs >Reporter: Brandon Li >Assignee: Wei-Chiu Chuang >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-11245:
------------------------------------
Labels: pull-request-available (was: )

> Update NFS gateway to use Netty4
> --------------------------------
>
>          Key: HADOOP-11245
>          URL: https://issues.apache.org/jira/browse/HADOOP-11245
>      Project: Hadoop Common
>   Issue Type: Sub-task
>   Components: nfs
>     Reporter: Brandon Li
>     Assignee: Wei-Chiu Chuang
>     Priority: Major
>       Labels: pull-request-available
>   Time Spent: 10m
> Remaining Estimate: 0h
[GitHub] [hadoop] jojochuang opened a new pull request #2832: HADOOP-11245. Update NFS gateway to use Netty4
jojochuang opened a new pull request #2832:
URL: https://github.com/apache/hadoop/pull/2832

## NOTICE
JIRA: https://issues.apache.org/jira/browse/HADOOP-11245
This is a draft. It passed unit tests, but it needs functional tests to make sure there are no memory leaks and that performance is good. Looking for additional pairs of eyes to help with the code review.
[jira] [Assigned] (HADOOP-11245) Update NFS gateway to use Netty4
[ https://issues.apache.org/jira/browse/HADOOP-11245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reassigned HADOOP-11245:
----------------------------------------
Assignee: Wei-Chiu Chuang (was: Siyao Meng)

> Update NFS gateway to use Netty4
> --------------------------------
>
>          Key: HADOOP-11245
>          URL: https://issues.apache.org/jira/browse/HADOOP-11245
>      Project: Hadoop Common
>   Issue Type: Sub-task
>   Components: nfs
>     Reporter: Brandon Li
>     Assignee: Wei-Chiu Chuang
>     Priority: Major
[GitHub] [hadoop] hadoop-yetus commented on pull request #2829: HDFS-15930: Fix some @param errors in DirectoryScanner.
hadoop-yetus commented on pull request #2829:
URL: https://github.com/apache/hadoop/pull/2829#issuecomment-809164953

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 32m 44s | | trunk passed |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 6s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 6s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 1m 8s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 52s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 11s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 44s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 5s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 1s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| -1 :x: | unit | 230m 12s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2829/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. |
| | | 315m 29s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
| | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2829/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2829 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 4cc5786ead65 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 464e00a308508d5833095a7da8e604c84fed4697 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2829/1/testReport/ |
| Max. process+thread count | 3044 (vs. ulimit of 5500) |
| modules | C:
[GitHub] [hadoop] jianghuazhu opened a new pull request #2831: HDFS-15920.Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured.
jianghuazhu opened a new pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831

…K_INTERVAL can be configured.

## NOTICE
Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
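For context, the change under review (see the BlockManagerSafeMode diff earlier in this thread) reads the recheck interval via `conf.getLong(DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY, …)`, so a deployment would set it in hdfs-site.xml. The property name below is only inferred from that constant's name and is an assumption, not confirmed anywhere in this thread:

```xml
<!-- hdfs-site.xml: hypothetical property name inferred from
     DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY; value in milliseconds.
     Non-positive values would fall back to the built-in default (1000). -->
<property>
  <name>dfs.namenode.safemode.recheck.interval</name>
  <value>2000</value>
</property>
```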
[GitHub] [hadoop] virajjasani opened a new pull request #2830: HDFS-15931 : Fix non-static inner classes for better memory management
virajjasani opened a new pull request #2830:
URL: https://github.com/apache/hadoop/pull/2830