[jira] [Resolved] (HADOOP-17327) NPE when starting MiniYARNCluster from hadoop-client-minicluster

2020-11-10 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun resolved HADOOP-17327.
---
Resolution: Fixed

> NPE when starting MiniYARNCluster from hadoop-client-minicluster
> 
>
> Key: HADOOP-17327
> URL: https://issues.apache.org/jira/browse/HADOOP-17327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Priority: Critical
>
> When starting MiniYARNCluster one could get the following exception:
> {code}
>   java.lang.NullPointerException:
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:72)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:122)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:616)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:122)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:327)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:96)
>   ...
> {code}
> Looking into the code, this is because we explicitly exclude resource files 
> under 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/TERMINAL}},
>  and therefore this code in {{WebServer}} fails with NPE:
> {code}
> terminalParams.put("resourceBase", WebServer.class
> .getClassLoader().getResource("TERMINAL").toExternalForm());
> {code}
> Those who use {{hadoop-minicluster}} may not be affected because they'll also 
> need {{hadoop-yarn-server-nodemanager}} as an extra dependency, which 
> includes the resource files. On the other hand, {{hadoop-client-minicluster}} 
> packages both test classes (e.g., {{MiniYARNCluster}}) and main 
> classes (e.g., {{ResourceManager}} and {{NodeManager}}) into a single shaded 
> jar. It should include these resource files for testing as well. Otherwise, 
> {{MiniYARNCluster}} is unusable.
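>
> As an illustration (a minimal sketch, not code from the issue or the patch): 
> {{ClassLoader.getResource}} returns {{null}} when the named resource is 
> absent from the classpath, so it is the chained {{.toExternalForm()}} call 
> that throws. A null-guarded variant fails with a clearer message:
> {code:java}
> java.net.URL terminal = WebServer.class.getClassLoader().getResource("TERMINAL");
> if (terminal == null) {
>   // The resource is missing, e.g. because it was excluded from a shaded jar.
>   throw new IllegalStateException("TERMINAL resource not found on classpath");
> }
> terminalParams.put("resourceBase", terminal.toExternalForm());
> {code}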



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17327) NPE when starting MiniYARNCluster from hadoop-client-minicluster

2020-11-10 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229799#comment-17229799
 ] 

Chao Sun commented on HADOOP-17327:
---

This is fixed as part of HADOOP-17324. Closing now.

> NPE when starting MiniYARNCluster from hadoop-client-minicluster
> 
>
> Key: HADOOP-17327
> URL: https://issues.apache.org/jira/browse/HADOOP-17327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Priority: Critical
>
> When starting MiniYARNCluster one could get the following exception:
> {code}
>   java.lang.NullPointerException:
>   at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:72)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:122)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:616)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:122)
>   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:327)
>   at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>   at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:96)
>   ...
> {code}
> Looking into the code, this is because we explicitly exclude resource files 
> under 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/TERMINAL}},
>  and therefore this code in {{WebServer}} fails with NPE:
> {code}
> terminalParams.put("resourceBase", WebServer.class
> .getClassLoader().getResource("TERMINAL").toExternalForm());
> {code}
> Those who use {{hadoop-minicluster}} may not be affected because they'll also 
> need {{hadoop-yarn-server-nodemanager}} as an extra dependency, which 
> includes the resource files. On the other hand, {{hadoop-client-minicluster}} 
> packages both test classes (e.g., {{MiniYARNCluster}}) and main 
> classes (e.g., {{ResourceManager}} and {{NodeManager}}) into a single shaded 
> jar. It should include these resource files for testing as well. Otherwise, 
> {{MiniYARNCluster}} is unusable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?focusedWorklogId=510129&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510129
 ]

ASF GitHub Bot logged work on HADOOP-17318:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 06:49
Start Date: 11/Nov/20 06:49
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on a change in pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#discussion_r521130866



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
##
@@ -585,7 +589,8 @@ public BulkOperationState initiateOperation(final Path path,
   @Retries.RetryTranslated
   public UploadPartResult uploadPart(UploadPartRequest request)
   throws IOException {
-return retry("upload part",
+return retry("upload part #" + request.getPartNumber()
++ " upload "+ request.getUploadId(),

Review comment:
   nit: s/upload/upload ID/
   
   I was thinking of consistent log keywords so that for any retry log we can 
search "upload ID" or "commit ID".

##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
##
@@ -131,6 +131,8 @@ protected WriteOperationHelper(S3AFileSystem owner, Configuration conf) {
*/
   void operationRetried(String text, Exception ex, int retries,
   boolean idempotent) {
+LOG.info("{}: Retried {}: {}", retries, text, ex.toString());

Review comment:
   the order of the parameters is wrong.
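
   A hedged reading (not taken from the PR): SLF4J fills the {} placeholders 
in argument order, so with the message "{}: Retried {}: {}" the intended call 
was presumably:

   ```java
   // Operation text first, then the retry count, then the exception.
   LOG.info("{}: Retried {}: {}", text, retries, ex.toString());
   ```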

##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
##
@@ -1430,6 +1450,255 @@ public void testParallelJobsToAdjacentPaths() throws Throwable {
 
   }
 
+
+  /**
+   * Run two jobs with the same destination and different output paths.
+   * 
+   * This only works if the jobs are set to NOT delete all outstanding
+   * uploads under the destination path.
+   * 
+   * See HADOOP-17318.
+   */
+  @Test
+  public void testParallelJobsToSameDestination() throws Throwable {
+
+describe("Run two jobs to the same destination, assert they both 
complete");
+Configuration conf = getConfiguration();
+conf.setBoolean(FS_S3A_COMMITTER_ABORT_PENDING_UPLOADS, false);
+
+// this job has a job ID generated and set as the spark UUID;
+// the config is also set to require it.
+// This mimics the Spark setup process.
+
+String stage1Id = UUID.randomUUID().toString();
+conf.set(SPARK_WRITE_UUID, stage1Id);
+conf.setBoolean(FS_S3A_COMMITTER_REQUIRE_UUID, true);
+
+// create the job and write data in its task attempt
+JobData jobData = startJob(true);
+Job job1 = jobData.job;
+AbstractS3ACommitter committer1 = jobData.committer;
+JobContext jContext1 = jobData.jContext;
+TaskAttemptContext tContext1 = jobData.tContext;
+Path job1TaskOutputFile = jobData.writtenTextPath;
+
+// the write path
+Assertions.assertThat(committer1.getWorkPath().toString())
+.describedAs("Work path path of %s", committer1)
+.contains(stage1Id);
+// now build up a second job
+String jobId2 = randomJobId();
+
+// second job will use same ID
+String attempt2 = taskAttempt0.toString();
+TaskAttemptID taskAttempt2 = taskAttempt0;
+
+// create the second job
+Configuration c2 = unsetUUIDOptions(new JobConf(conf));
+c2.setBoolean(FS_S3A_COMMITTER_REQUIRE_UUID, true);
+Job job2 = newJob(outDir,
+c2,
+attempt2);
+Configuration conf2 = job2.getConfiguration();

Review comment:
   nit: maybe rename `conf2` to something like `jobConf2` to make it a bit clearer.

##
File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md
##
@@ -535,20 +535,28 @@ Conflict management is left to the execution engine itself.
 
 | Option | Magic | Directory | Partitioned | Meaning | Default |
 |--------|-------|-----------|-------------|---------|---------|
-| `mapreduce.fileoutputcommitter.marksuccessfuljobs` | X | X | X | Write a `_SUCCESS` file  at the end of each job | `true` |
+| `mapreduce.fileoutputcommitter.marksuccessfuljobs` | X | X | X | Write a `_SUCCESS` file on the successful completion of the job. | `true` |
+| `fs.s3a.buffer.dir` | X | X | X | Local filesystem directory for data being written and/or staged. | `${hadoop.tmp.dir}/s3a` |
+| `fs.s3a.committer.magic.enabled` | X |  | | Enable "magic committer" support in the filesystem. | `false` |
+| `fs.s3a.committer.abort.pending.uploads` | X | X | X | list and abort all pending uploads under the destination path when the job is committed or aborted. | `true` |
 | `fs.s3a.committer.threads` | X | X | X | Number of threads in committers for parallel operations on files. | 8 |
-| `fs.s3a.committer.staging.conflict-mode` |  | X | X | Conflict resolution: `fail`, 

[GitHub] [hadoop] liuml07 commented on a change in pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID.

2020-11-10 Thread GitBox


liuml07 commented on a change in pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#discussion_r521130866



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
##
@@ -585,7 +589,8 @@ public BulkOperationState initiateOperation(final Path path,
   @Retries.RetryTranslated
   public UploadPartResult uploadPart(UploadPartRequest request)
   throws IOException {
-return retry("upload part",
+return retry("upload part #" + request.getPartNumber()
++ " upload "+ request.getUploadId(),

Review comment:
   nit: s/upload/upload ID/
   
   I was thinking of consistent log keywords so that for any retry log we can 
search "upload ID" or "commit ID".

##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
##
@@ -131,6 +131,8 @@ protected WriteOperationHelper(S3AFileSystem owner, Configuration conf) {
*/
   void operationRetried(String text, Exception ex, int retries,
   boolean idempotent) {
+LOG.info("{}: Retried {}: {}", retries, text, ex.toString());

Review comment:
   the order of the parameters is wrong.

##
File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitProtocol.java
##
@@ -1430,6 +1450,255 @@ public void testParallelJobsToAdjacentPaths() throws Throwable {
 
   }
 
+
+  /**
+   * Run two jobs with the same destination and different output paths.
+   * 
+   * This only works if the jobs are set to NOT delete all outstanding
+   * uploads under the destination path.
+   * 
+   * See HADOOP-17318.
+   */
+  @Test
+  public void testParallelJobsToSameDestination() throws Throwable {
+
+describe("Run two jobs to the same destination, assert they both 
complete");
+Configuration conf = getConfiguration();
+conf.setBoolean(FS_S3A_COMMITTER_ABORT_PENDING_UPLOADS, false);
+
+// this job has a job ID generated and set as the spark UUID;
+// the config is also set to require it.
+// This mimics the Spark setup process.
+
+String stage1Id = UUID.randomUUID().toString();
+conf.set(SPARK_WRITE_UUID, stage1Id);
+conf.setBoolean(FS_S3A_COMMITTER_REQUIRE_UUID, true);
+
+// create the job and write data in its task attempt
+JobData jobData = startJob(true);
+Job job1 = jobData.job;
+AbstractS3ACommitter committer1 = jobData.committer;
+JobContext jContext1 = jobData.jContext;
+TaskAttemptContext tContext1 = jobData.tContext;
+Path job1TaskOutputFile = jobData.writtenTextPath;
+
+// the write path
+Assertions.assertThat(committer1.getWorkPath().toString())
+.describedAs("Work path path of %s", committer1)
+.contains(stage1Id);
+// now build up a second job
+String jobId2 = randomJobId();
+
+// second job will use same ID
+String attempt2 = taskAttempt0.toString();
+TaskAttemptID taskAttempt2 = taskAttempt0;
+
+// create the second job
+Configuration c2 = unsetUUIDOptions(new JobConf(conf));
+c2.setBoolean(FS_S3A_COMMITTER_REQUIRE_UUID, true);
+Job job2 = newJob(outDir,
+c2,
+attempt2);
+Configuration conf2 = job2.getConfiguration();

Review comment:
   nit: maybe rename `conf2` to something like `jobConf2` to make it a bit clearer.

##
File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/committers.md
##
@@ -535,20 +535,28 @@ Conflict management is left to the execution engine itself.
 
 | Option | Magic | Directory | Partitioned | Meaning | Default |
 |--------|-------|-----------|-------------|---------|---------|
-| `mapreduce.fileoutputcommitter.marksuccessfuljobs` | X | X | X | Write a `_SUCCESS` file  at the end of each job | `true` |
+| `mapreduce.fileoutputcommitter.marksuccessfuljobs` | X | X | X | Write a `_SUCCESS` file on the successful completion of the job. | `true` |
+| `fs.s3a.buffer.dir` | X | X | X | Local filesystem directory for data being written and/or staged. | `${hadoop.tmp.dir}/s3a` |
+| `fs.s3a.committer.magic.enabled` | X |  | | Enable "magic committer" support in the filesystem. | `false` |
+| `fs.s3a.committer.abort.pending.uploads` | X | X | X | list and abort all pending uploads under the destination path when the job is committed or aborted. | `true` |
 | `fs.s3a.committer.threads` | X | X | X | Number of threads in committers for parallel operations on files. | 8 |
-| `fs.s3a.committer.staging.conflict-mode` |  | X | X | Conflict resolution: `fail`, `append` or `replace`| `append` |
-| `fs.s3a.committer.staging.unique-filenames` |  | X | X | Generate unique filenames | `true` |
-| `fs.s3a.committer.magic.enabled` | X |  | | Enable "magic committer" support in the filesystem | `false` |
+| `fs.s3a.committer.generate.uuid` |  | X | X | Generate a Job UUID if none is passed down from Spark | `false` |
+| `fs.s3a.committer.require.uuid` |  | X | X | Require the Job UUID 

[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-11-10 Thread zhongjun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhongjun updated HADOOP-16492:
--
Attachment: HADOOP-16492.022.patch

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, 
> HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, 
> HADOOP-16492.016.patch, HADOOP-16492.017.patch, HADOOP-16492.018.patch, 
> HADOOP-16492.019.patch, HADOOP-16492.020.patch, HADOOP-16492.021.patch, 
> HADOOP-16492.022.patch, OBSA HuaweiCloud OBS Adapter for Hadoop Support.pdf
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to the Hadoop file 
> system, just as we did before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.
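>
> A minimal sketch of what that configuration could look like (the {{fs.obs.*}} 
> property names here are assumptions modeled on the S3A convention, not 
> confirmed against the patch):
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
>
> Configuration conf = new Configuration();
> // Hypothetical keys following the fs.<scheme>.* pattern used by S3A:
> conf.set("fs.obs.endpoint", "obs.cn-north-1.myhuaweicloud.com");
> conf.set("fs.obs.access.key", "<access key>");
> conf.set("fs.obs.secret.key", "<secret key>");
> // Applications then address the store through its obs:// URI with no code change.
> FileSystem fs = FileSystem.get(URI.create("obs://mybucket/data"), conf);
> {code}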



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16870?focusedWorklogId=510105&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510105
 ]

ASF GitHub Bot logged work on HADOOP-16870:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 04:58
Start Date: 11/Nov/20 04:58
Worklog Time Spent: 10m 
  Work Description: iwasakims commented on pull request #2454:
URL: https://github.com/apache/hadoop/pull/2454#issuecomment-725198663


   @aajisaka the spotbugs commands installed in the Docker container do not 
have the execute (x) permission. Could this cause any problems?
   ```
   centos@49e5870daabc:~/hadoop$  /opt/spotbugs/bin/convertXmlToText
   bash: /opt/spotbugs/bin/convertXmlToText: Permission denied
   
   
   centos@49e5870daabc:~/hadoop$ ls -lh /opt/spotbugs/bin
   total 104K
   -rw-rw-r--. 1 root root   77 Jan  2  1970 addMessages
   -rw-rw-r--. 1 root root  189 Jan  2  1970 computeBugHistory
   -rw-rw-r--. 1 root root   85 Jan  2  1970 convertXmlToText
   -rw-rw-r--. 1 root root   90 Jan  2  1970 copyBuggySource
   -rw-rw-r--. 1 root root  145 Jan  2  1970 defectDensity
   drwxrwxr-x. 2 root root   79 Jan  2  1970 deprecated
   drwxrwxr-x. 2 root root   98 Jan  2  1970 experimental
   -rw-rw-r--. 1 root root 4.2K Jan  2  1970 fb
   -rw-rw-r--. 1 root root 2.1K Jan  2  1970 fbwrap
   -rw-rw-r--. 1 root root  186 Jan  2  1970 filterBugs
   -rw-rw-r--. 1 root root   96 Jan  2  1970 findbugs-msv
   -rw-rw-r--. 1 root root   94 Jan  2  1970 listBugDatabaseInfo
   -rw-rw-r--. 1 root root   89 Jan  2  1970 mineBugHistory
   -rw-rw-r--. 1 root root   90 Jan  2  1970 printAppVersion
   -rw-rw-r--. 1 root root   87 Jan  2  1970 printClass
   -rw-rw-r--. 1 root root   98 Jan  2  1970 rejarForAnalysis
   -rw-rw-r--. 1 root root   93 Jan  2  1970 setBugDatabaseInfo
   -rw-rw-r--. 1 root root 4.3K Jan  2  1970 spotbugs
   -rw-rw-r--. 1 root root 6.8K Jan  2  1970 spotbugs.bat
   -rw-rw-r--. 1 root root 9.5K Jan  2  1970 spotbugs.ico
   -rw-rw-r--. 1 root root 3.2K Jan  2  1970 spotbugs2
   -rw-rw-r--. 1 root root  197 Jan  2  1970 unionBugs
   -rw-rw-r--. 1 root root   79 Jan  2  1970 xpathFind
   ```
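
   (If the missing execute bit is indeed the problem, a presumable fix, 
untested here, would be a `RUN chmod -R a+x /opt/spotbugs/bin` step in the 
Dockerfile after the SpotBugs install.)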



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510105)
Time Spent: 50m  (was: 40m)

> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #2454: HADOOP-16870. Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2020-11-10 Thread GitBox


iwasakims commented on pull request #2454:
URL: https://github.com/apache/hadoop/pull/2454#issuecomment-725198663


   @aajisaka the spotbugs commands installed in the Docker container do not 
have the execute (x) permission. Could this cause any problems?
   ```
   centos@49e5870daabc:~/hadoop$  /opt/spotbugs/bin/convertXmlToText
   bash: /opt/spotbugs/bin/convertXmlToText: Permission denied
   
   
   centos@49e5870daabc:~/hadoop$ ls -lh /opt/spotbugs/bin
   total 104K
   -rw-rw-r--. 1 root root   77 Jan  2  1970 addMessages
   -rw-rw-r--. 1 root root  189 Jan  2  1970 computeBugHistory
   -rw-rw-r--. 1 root root   85 Jan  2  1970 convertXmlToText
   -rw-rw-r--. 1 root root   90 Jan  2  1970 copyBuggySource
   -rw-rw-r--. 1 root root  145 Jan  2  1970 defectDensity
   drwxrwxr-x. 2 root root   79 Jan  2  1970 deprecated
   drwxrwxr-x. 2 root root   98 Jan  2  1970 experimental
   -rw-rw-r--. 1 root root 4.2K Jan  2  1970 fb
   -rw-rw-r--. 1 root root 2.1K Jan  2  1970 fbwrap
   -rw-rw-r--. 1 root root  186 Jan  2  1970 filterBugs
   -rw-rw-r--. 1 root root   96 Jan  2  1970 findbugs-msv
   -rw-rw-r--. 1 root root   94 Jan  2  1970 listBugDatabaseInfo
   -rw-rw-r--. 1 root root   89 Jan  2  1970 mineBugHistory
   -rw-rw-r--. 1 root root   90 Jan  2  1970 printAppVersion
   -rw-rw-r--. 1 root root   87 Jan  2  1970 printClass
   -rw-rw-r--. 1 root root   98 Jan  2  1970 rejarForAnalysis
   -rw-rw-r--. 1 root root   93 Jan  2  1970 setBugDatabaseInfo
   -rw-rw-r--. 1 root root 4.3K Jan  2  1970 spotbugs
   -rw-rw-r--. 1 root root 6.8K Jan  2  1970 spotbugs.bat
   -rw-rw-r--. 1 root root 9.5K Jan  2  1970 spotbugs.ico
   -rw-rw-r--. 1 root root 3.2K Jan  2  1970 spotbugs2
   -rw-rw-r--. 1 root root  197 Jan  2  1970 unionBugs
   -rw-rw-r--. 1 root root   79 Jan  2  1970 xpathFind
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-11-10 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229731#comment-17229731
 ] 

Hadoop QA commented on HADOOP-16492:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
1s{color} |  | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 21 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} |  | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
26s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
36s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
11s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 33s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  0m 
46s{color} |  | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} |  | {color:blue} branch/hadoop-project no findbugs output file 
(findbugsXml.xml) {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} |  | {color:blue} 
branch/hadoop-cloud-storage-project/hadoop-cloud-storage no findbugs output 
file (findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} |  | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | 
[/patch-mvninstall-hadoop-cloud-storage-project_hadoop-huaweicloud.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/114/artifact/out/patch-mvninstall-hadoop-cloud-storage-project_hadoop-huaweicloud.txt]
 | {color:red} hadoop-huaweicloud in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | 
[/patch-mvninstall-hadoop-cloud-storage-project.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/114/artifact/out/patch-mvninstall-hadoop-cloud-storage-project.txt]
 | {color:red} hadoop-cloud-storage-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | 
[/patch-mvninstall-hadoop-cloud-storage-project_hadoop-cloud-storage.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/114/artifact/out/patch-mvninstall-hadoop-cloud-storage-project_hadoop-cloud-storage.txt]
 | {color:red} hadoop-cloud-storage in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 23m 
28s{color} | 
[/patch-compile-root-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt|https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/114/artifact/out/patch-compile-root-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1.txt]
 | {color:red} root in the patch failed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1. {color} |
| 

[jira] [Updated] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2020-11-10 Thread Yongjun Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-17338:
---
Description: 
We are seeing the following two kinds of intermittent exceptions when using 
S3AInputStream:

1.
{code:java}
Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: 
Premature end of Content-Length delimited message body (expected: 156463674; 
received: 150001089)
at 
com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
at 
com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at 
org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
at 
org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
at 
org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
at 
org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
at 
org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
at 
org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
at 
org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
... 15 more
{code}
2.
{code:java}
Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
at sun.security.ssl.InputRecord.read(InputRecord.java:532)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
at 
com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
at 
com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at 
com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
at 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2361)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2493)
at 

[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.34

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=510091&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510091
 ]

ASF GitHub Bot logged work on HADOOP-17371:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 03:25
Start Date: 11/Nov/20 03:25
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725115843


   In addition, TestKMS failed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510091)
Time Spent: 1h  (was: 50m)

> Bump Jetty to the latest version 9.4.34
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2453: HADOOP-17371. Bump Jetty to the latest version 9.4.34. Contributed by Wei-Chiu Chuang.

2020-11-10 Thread GitBox


aajisaka commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725115843


   In addition, TestKMS failed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.34

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=510083&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510083
 ]

ASF GitHub Bot logged work on HADOOP-17371:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 02:51
Start Date: 11/Nov/20 02:51
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725095074


   No unit tests were run in the Jenkins job. I'll run the unit tests in the 
hadoop-common module locally before giving a +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510083)
Time Spent: 50m  (was: 40m)

> Bump Jetty to the latest version 9.4.34
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2453: HADOOP-17371. Bump Jetty to the latest version 9.4.34. Contributed by Wei-Chiu Chuang.

2020-11-10 Thread GitBox


aajisaka commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725095074


   No unit tests were run in the Jenkins job. I'll run the unit tests in the 
hadoop-common module locally before giving a +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17358) Improve excessive reloading of Configurations

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17358?focusedWorklogId=510081&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510081
 ]

ASF GitHub Bot logged work on HADOOP-17358:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 02:48
Start Date: 11/Nov/20 02:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#issuecomment-725094031


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 25s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 23s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 17s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 22s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 48s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2436 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 689cb9dc303d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 375900049cc |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/4/testReport/ |
   | Max. process+thread count | 1401 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2436: HADOOP-17358. Improve excessive reloading of Configurations

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#issuecomment-725094031


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 25s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 23s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 17s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 22s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 48s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2436 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 689cb9dc303d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 375900049cc |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/4/testReport/ |
   | Max. process+thread count | 1401 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] ayushtkn commented on pull request #2443: HDFS-15659. MiniDFSCluster dfs.namenode.redundancy.considerLoad default to false

2020-11-10 Thread GitBox


ayushtkn commented on pull request #2443:
URL: https://github.com/apache/hadoop/pull/2443#issuecomment-725093714


   Thanks @amahussein for the update. The changes look good. Can you revert the 
last commit? I think the tests are stable. We can proceed here.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16870?focusedWorklogId=510078&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510078
 ]

ASF GitHub Bot logged work on HADOOP-16870:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 02:45
Start Date: 11/Nov/20 02:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2454:
URL: https://github.com/apache/hadoop/pull/2454#issuecomment-725093239


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2454/2/console in 
case of problems.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510078)
Time Spent: 40m  (was: 0.5h)

> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2454: HADOOP-16870. Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2454:
URL: https://github.com/apache/hadoop/pull/2454#issuecomment-725093239


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2454/2/console in 
case of problems.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2020-11-10 Thread zhongjun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhongjun updated HADOOP-16492:
--
Attachment: HADOOP-16492.021.patch

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, 
> HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, 
> HADOOP-16492.016.patch, HADOOP-16492.017.patch, HADOOP-16492.018.patch, 
> HADOOP-16492.019.patch, HADOOP-16492.020.patch, HADOOP-16492.021.patch, OBSA 
> HuaweiCloud OBS Adapter for Hadoop Support.pdf
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to the Hadoop file 
> system, just as we did before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16870?focusedWorklogId=510056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510056
 ]

ASF GitHub Bot logged work on HADOOP-16870:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 01:48
Start Date: 11/Nov/20 01:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2454:
URL: https://github.com/apache/hadoop/pull/2454#issuecomment-725075534


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  35m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  20m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |  29m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 41s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  51m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  22m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  19m 18s |  |  the patch passed  |
   | +1 :green_heart: |  hadolint  |   0m  5s |  |  There were no new hadolint 
issues.  |
   | +1 :green_heart: |  mvnsite  |  20m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m 33s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 590m 27s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2454/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 922m 15s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | hadoop.tools.dynamometer.TestDynamometerInfra |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2454/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2454 |
   | Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit 
hadolint compile javac javadoc mvninstall shadedclient xml |
   | uname | Linux 52763b7940be 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |

[jira] [Work logged] (HADOOP-17373) hadoop-client-integration-tests doesn't work when building with skipShade

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17373?focusedWorklogId=510050=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510050
 ]

ASF GitHub Bot logged work on HADOOP-17373:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 01:29
Start Date: 11/Nov/20 01:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2458:
URL: https://github.com/apache/hadoop/pull/2458#issuecomment-725069905


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  55m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  20m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 16s |  |  hadoop-client-integration-tests 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2458/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2458 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 079627d1afbe 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 375900049cc |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2458/1/testReport/ |
   | Max. process+thread count | 573 (vs. ulimit of 5500) |
   | modules | C: hadoop-client-modules/hadoop-client-integration-tests U: 
hadoop-client-modules/hadoop-client-integration-tests |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2458/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


[GitHub] [hadoop] aajisaka opened a new pull request #2459: [DO NOT MERGE] Test "YETUS-1079. github-status-recovery is out of order" in Hadoop repo

2020-11-10 Thread GitBox


aajisaka opened a new pull request #2459:
URL: https://github.com/apache/hadoop/pull/2459


   Test https://github.com/apache/yetus/pull/200



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.

2020-11-10 Thread GitBox


sunchao commented on pull request #2138:
URL: https://github.com/apache/hadoop/pull/2138#issuecomment-725051805


   Test results look good. Merging to master.
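   
   For context, HDFS-15469 makes the hard-coded PacketReceiver#MAX_PACKET_SIZE 
   cap (16 MiB) configurable instead of a compile-time constant. A minimal 
   sketch of that pattern, using a hypothetical property name rather than 
   whatever key the patch actually adds:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class PacketSizeSketch {
     // Hypothetical key name, for illustration only; the key introduced by
     // HDFS-15469 may differ.
     static final String MAX_PACKET_SIZE_KEY = "dfs.data.transfer.max.packet.size";
     // 16 * 1024 * 1024 is the value MAX_PACKET_SIZE was previously hard-coded to.
     static final int MAX_PACKET_SIZE_DEFAULT = 16 * 1024 * 1024;
   
     /** Read the packet-size cap from the configuration rather than a constant. */
     static int maxPacketSize(Configuration conf) {
       return conf.getInt(MAX_PACKET_SIZE_KEY, MAX_PACKET_SIZE_DEFAULT);
     }
   }
   ```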



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao merged pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.

2020-11-10 Thread GitBox


sunchao merged pull request #2138:
URL: https://github.com/apache/hadoop/pull/2138


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2138:
URL: https://github.com/apache/hadoop/pull/2138#issuecomment-725050922


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   3m  3s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 28s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   3m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   3m 36s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 50s | 
[/diff-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2138/6/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 2 new + 35 unchanged - 0 fixed = 
37 total (was 35)  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   5m 34s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  96m 33s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 211m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2138/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2138 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux e79decd213af 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fbd2220167f |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2138/6/testReport/ |
   | Max. process+thread count | 4522 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2138/6/console |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2457: HDFS-15680. disable broken azure test-units

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2457:
URL: https://github.com/apache/hadoop/pull/2457#issuecomment-725050968


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  55m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  84m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2457/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2457 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 06a00ae68cd1 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 375900049cc |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2457/1/testReport/ |
   | Max. process+thread count | 627 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2457/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2456:
URL: https://github.com/apache/hadoop/pull/2456#issuecomment-725046605


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   3m  8s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 47s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   4m  6s |  |  
hadoop-hdfs-project-jdkUbuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 generated 0 new + 774 unchanged - 5 
fixed = 774 total (was 779)  |
   | +1 :green_heart: |  compile  |   3m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   3m 56s |  |  
hadoop-hdfs-project-jdkPrivateBuild-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 with 
JDK Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 generated 0 new + 751 
unchanged - 5 fixed = 751 total (was 756)  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   5m 35s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 125m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2456/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 258m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId |
   |   | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | hadoop.hdfs.server.namenode.TestFileContextXAttr |
   |   | hadoop.hdfs.server.namenode.TestAddStripedBlocks |
   |   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   |   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
   |   | hadoop.hdfs.server.namenode.TestCacheDirectives |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | 

[jira] [Comment Edited] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229595#comment-17229595
 ] 

Chao Sun edited comment on HADOOP-17324 at 11/11/20, 12:08 AM:
---

I created HADOOP-17373 and posted a PR there (let me know if an addendum is 
more suitable for this and I can do that too).


was (Author: csun):
I created HADOOP-17324 and posted a PR there (let me know if an addendum is 
more suitable for this and I can do that too).

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}
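
For reference, maven-shade-plugin supports per-relocation excludes, which is 
the mechanism the issue title points at: leave org.bouncycastle out of the 
org.* relocation so the class keeps its original name in the shaded jars. 
This is a sketch only; the exact patterns in the Hadoop client-module poms 
may differ:

{code}
<!-- Leave bouncycastle classes unrelocated so callers load
     org.bouncycastle.operator.OperatorCreationException by its real name. -->
<relocation>
  <pattern>org</pattern>
  <shadedPattern>org.apache.hadoop.shaded.org</shadedPattern>
  <excludes>
    <exclude>org.apache.hadoop.**</exclude>
    <exclude>org.bouncycastle.**</exclude>
  </excludes>
</relocation>
{code}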



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229595#comment-17229595
 ] 

Chao Sun commented on HADOOP-17324:
---

I created HADOOP-17324 and posted a PR there (let me know if an addendum is 
more suitable for this and I can do that too).

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17373) hadoop-client-integration-tests doesn't work when building with skipShade

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17373:

Labels: pull-request-available  (was: )

> hadoop-client-integration-tests doesn't work when building with skipShade
> -
>
> Key: HADOOP-17373
> URL: https://issues.apache.org/jira/browse/HADOOP-17373
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Compiling with skipShade:
> {code}
> mvn clean install -Pdist -DskipShade -DskipTests -Dtar -Danimal.sniffer.skip 
> -Dmaven.javadoc.skip=true
> {code}
> fails with
> {code}
> [ERROR] 
> /Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[47,37]
>  package org.apache.hadoop.yarn.server does not exist
> [ERROR] 
> /Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[59,11]
>  cannot find symbol
> [ERROR]   symbol:   class MiniYARNCluster
> [ERROR]   location: class org.apache.hadoop.example.ITUseMiniCluster
> [ERROR] 
> /Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[82,23]
>  cannot find symbol
> [ERROR]   symbol:   class MiniYARNCluster
> [ERROR]   location: class org.apache.hadoop.example.ITUseMiniCluster
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hadoop-client-integration-tests
> {code}
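
The failure is mechanical: with -DskipShade the shaded 
hadoop-client-minicluster jar is never built, so ITUseMiniCluster has nothing 
on its compile classpath that provides MiniYARNCluster. One plausible shape 
for a fix is a property-activated Maven profile that compiles the integration 
tests against the unshaded YARN test artifact instead; this is a sketch under 
that assumption, not necessarily what the posted PR does:

{code}
<!-- Activated by -DskipShade: fall back to the unshaded artifact that
     contains org.apache.hadoop.yarn.server.MiniYARNCluster. -->
<profile>
  <id>skipshade-compile-classpath</id>
  <activation>
    <property><name>skipShade</name></property>
  </activation>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-yarn-server-tests</artifactId>
      <version>${project.version}</version>
      <type>test-jar</type>
      <scope>test</scope>
    </dependency>
  </dependencies>
</profile>
{code}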



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17373) hadoop-client-integration-tests doesn't work when building with skipShade

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17373?focusedWorklogId=510014=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510014
 ]

ASF GitHub Bot logged work on HADOOP-17373:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 00:06
Start Date: 11/Nov/20 00:06
Worklog Time Spent: 10m 
  Work Description: sunchao opened a new pull request #2458:
URL: https://github.com/apache/hadoop/pull/2458


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510014)
Remaining Estimate: 0h
Time Spent: 10m

> hadoop-client-integration-tests doesn't work when building with skipShade
> -
>
> Key: HADOOP-17373
> URL: https://issues.apache.org/jira/browse/HADOOP-17373
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Compiling with skipShade:
> {code}
> mvn clean install -Pdist -DskipShade -DskipTests -Dtar -Danimal.sniffer.skip 
> -Dmaven.javadoc.skip=true
> {code}
> fails with
> {code}
> [ERROR] 
> /Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[47,37]
>  package org.apache.hadoop.yarn.server does not exist
> [ERROR] 
> /Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[59,11]
>  cannot find symbol
> [ERROR]   symbol:   class MiniYARNCluster
> [ERROR]   location: class org.apache.hadoop.example.ITUseMiniCluster
> [ERROR] 
> /Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[82,23]
>  cannot find symbol
> [ERROR]   symbol:   class MiniYARNCluster
> [ERROR]   location: class org.apache.hadoop.example.ITUseMiniCluster
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :hadoop-client-integration-tests
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17373) hadoop-client-integration-tests doesn't work when building with skipShade

2020-11-10 Thread Chao Sun (Jira)
Chao Sun created HADOOP-17373:
-

 Summary: hadoop-client-integration-tests doesn't work when 
building with skipShade
 Key: HADOOP-17373
 URL: https://issues.apache.org/jira/browse/HADOOP-17373
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chao Sun
Assignee: Chao Sun


Compiling with skipShade:
{code}
mvn clean install -Pdist -DskipShade -DskipTests -Dtar -Danimal.sniffer.skip 
-Dmaven.javadoc.skip=true
{code}

fails with
{code}
[ERROR] 
/Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[47,37]
 package org.apache.hadoop.yarn.server does not exist
[ERROR] 
/Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[59,11]
 cannot find symbol
[ERROR]   symbol:   class MiniYARNCluster
[ERROR]   location: class org.apache.hadoop.example.ITUseMiniCluster
[ERROR] 
/Users/chao/git/hadoop/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[82,23]
 cannot find symbol
[ERROR]   symbol:   class MiniYARNCluster
[ERROR]   location: class org.apache.hadoop.example.ITUseMiniCluster
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-client-integration-tests
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17358) Improve excessive reloading of Configurations

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17358?focusedWorklogId=510012=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510012
 ]

ASF GitHub Bot logged work on HADOOP-17358:
---

Author: ASF GitHub Bot
Created on: 11/Nov/20 00:01
Start Date: 11/Nov/20 00:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#issuecomment-725040811


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  18m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 20s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 18s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  20m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  18m 18s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 24s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  0s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 181m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2436 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 308c2b6e22bf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fbd2220167f |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/3/testReport/ |
   | Max. process+thread count | 2879 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2436/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


[jira] [Work logged] (HADOOP-17358) Improve excessive reloading of Configurations

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17358?focusedWorklogId=510010&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510010
 ]

ASF GitHub Bot logged work on HADOOP-17358:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 23:56
Start Date: 10/Nov/20 23:56
Worklog Time Spent: 10m 
  Work Description: amahussein commented on a change in pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#discussion_r520949563



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -2876,12 +2876,28 @@ public Reader getConfResourceAsReader(String name) {
   protected synchronized Properties getProps() {
 if (properties == null) {
   properties = new Properties();
-  Map<String, String[]> backup = updatingResource != null ?
-  new ConcurrentHashMap<String, String[]>(updatingResource) : null;
-  loadResources(properties, resources, quietmode);
+  loadProps(properties, 0, true);
+}
+return properties;
+  }
 
+  /**
+   * Loads the resource at a given index into the properties.
+   * @param props the object containing the loaded properties.
+   * @param startIdx the index where the new resource has been added.
+   * @param fullReload flag whether we do complete reload of the conf instead
+   *   of just loading the new resource.
+   * @return the properties loaded from the resource.

Review comment:
   Sorry, I overlooked that.
   I added a new commit.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510010)
Time Spent: 1h 50m  (was: 1h 40m)

> Improve excessive reloading of Configurations
> -
>
> Key: HADOOP-17358
> URL: https://issues.apache.org/jira/browse/HADOOP-17358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> [~daryn] reported that adding a new resource to a conf forces a complete 
> reload of the conf instead of just loading the new resource. Instantiating a 
> {{SSLFactory}} adds a new resource for the ssl client/server file. Formerly 
> only the KMS client used the SSLFactory but now TLS/RPC uses it too.
> The reload is so costly that RM token cancellation falls behind by hours or 
> days. The accumulation of uncancelled tokens in the KMS rose from a few 
> thousand to hundreds of thousands which risks ZK scalability issues causing a 
> KMS outage.
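
A minimal, self-contained model of the incremental-load approach described above (hypothetical code; only the names {{loadProps}}, {{startIdx}} and {{fullReload}} come from the patch diff quoted earlier in this thread, the rest is illustrative):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Toy model: adding a resource after the cache is built parses only the
// new resource, instead of forcing a complete reload of every resource.
public class IncrementalConf {
  private final List<String> resources = new ArrayList<>();
  private Properties properties; // lazily built cache

  public synchronized void addResource(String resource) {
    resources.add(resource);
    if (properties != null) {
      // Cache already built: parse only the newly appended resource.
      loadProps(properties, resources.size() - 1, false);
    } // else: defer until getProps() performs the initial full load.
  }

  public synchronized Properties getProps() {
    if (properties == null) {
      properties = new Properties();
      loadProps(properties, 0, true); // first access: full load
    }
    return properties;
  }

  // Loads resources[startIdx..] into props; fullReload marks a complete
  // rebuild rather than an incremental append.
  private void loadProps(Properties props, int startIdx, boolean fullReload) {
    if (fullReload) {
      props.clear(); // a complete rebuild discards previously loaded keys
    }
    for (int i = startIdx; i < resources.size(); i++) {
      // Stand-in for the real XML parsing of a conf resource.
      props.setProperty("source." + i, resources.get(i));
    }
  }
}
{code}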



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on a change in pull request #2436: HADOOP-17358. Improve excessive reloading of Configurations

2020-11-10 Thread GitBox


amahussein commented on a change in pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#discussion_r520949563



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -2876,12 +2876,28 @@ public Reader getConfResourceAsReader(String name) {
   protected synchronized Properties getProps() {
 if (properties == null) {
   properties = new Properties();
-  Map<String, String[]> backup = updatingResource != null ?
-  new ConcurrentHashMap<String, String[]>(updatingResource) : null;
-  loadResources(properties, resources, quietmode);
+  loadProps(properties, 0, true);
+}
+return properties;
+  }
 
+  /**
+   * Loads the resource at a given index into the properties.
+   * @param props the object containing the loaded properties.
+   * @param startIdx the index where the new resource has been added.
+   * @param fullReload flag whether we do complete reload of the conf instead
+   *   of just loading the new resource.
+   * @return the properties loaded from the resource.

Review comment:
   Sorry, I overlooked that.
   I added a new commit.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229576#comment-17229576
 ] 

Chao Sun commented on HADOOP-17324:
---

Ah I think we'll need to add the dependency in:
{code}

<profile>
  <id>noshade</id>
  <activation>
    <property>
      <name>skipShade</name>
    </property>
  </activation>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs-client</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <scope>test</scope>
      <type>test-jar</type>
    </dependency>
  </dependencies>
</profile>
{code}

for the {{hadoop-client-integration-tests}}. Let me work on that.
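
For reference, a no-shade build that should exercise this profile once the dependencies above are in place is the same invocation used elsewhere in this thread:

{code}
mvn clean install -Pdist -DskipShade -DskipTests -Dtar -Danimal.sniffer.skip -Dmaven.javadoc.skip=true
{code}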

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17358) Improve excessive reloading of Configurations

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17358?focusedWorklogId=510009&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510009
 ]

ASF GitHub Bot logged work on HADOOP-17358:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 23:50
Start Date: 10/Nov/20 23:50
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#discussion_r520947632



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -2876,12 +2876,28 @@ public Reader getConfResourceAsReader(String name) {
   protected synchronized Properties getProps() {
 if (properties == null) {
   properties = new Properties();
-  Map<String, String[]> backup = updatingResource != null ?
-  new ConcurrentHashMap<String, String[]>(updatingResource) : null;
-  loadResources(properties, resources, quietmode);
+  loadProps(properties, 0, true);
+}
+return properties;
+  }
 
+  /**
+   * Loads the resource at a given index into the properties.
+   * @param props the object containing the loaded properties.
+   * @param startIdx the index where the new resource has been added.
+   * @param fullReload flag whether we do complete reload of the conf instead
+   *   of just loading the new resource.
+   * @return the properties loaded from the resource.

Review comment:
   Could you also remove this @return? I am +1 once this is updated.
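
A sketch of the corrected doc comment, i.e. the quoted block above minus the @return tag (illustrative only; the final wording is whatever the follow-up commit contains):

{code}
  /**
   * Loads the resource at a given index into the properties.
   * @param props the object containing the loaded properties.
   * @param startIdx the index where the new resource has been added.
   * @param fullReload flag whether we do complete reload of the conf instead
   *   of just loading the new resource.
   */
{code}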





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510009)
Time Spent: 1h 40m  (was: 1.5h)

> Improve excessive reloading of Configurations
> -
>
> Key: HADOOP-17358
> URL: https://issues.apache.org/jira/browse/HADOOP-17358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> [~daryn] reported that adding a new resource to a conf forces a complete 
> reload of the conf instead of just loading the new resource. Instantiating a 
> {{SSLFactory}} adds a new resource for the ssl client/server file. Formerly 
> only the KMS client used the SSLFactory but now TLS/RPC uses it too.
> The reload is so costly that RM token cancellation falls behind by hours or 
> days. The accumulation of uncancelled tokens in the KMS rose from a few 
> thousand to hundreds of thousands which risks ZK scalability issues causing a 
> KMS outage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229575#comment-17229575
 ] 

Chao Sun commented on HADOOP-17324:
---

[~epayne] hmm can you do a clean install? 

{code}
mvn clean install -Pdist -DskipShade -DskipTests -Dtar -Danimal.sniffer.skip 
-Dmaven.javadoc.skip=true
{code}

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #2436: HADOOP-17358. Improve excessive reloading of Configurations

2020-11-10 Thread GitBox


jojochuang commented on a change in pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#discussion_r520947632



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -2876,12 +2876,28 @@ public Reader getConfResourceAsReader(String name) {
   protected synchronized Properties getProps() {
 if (properties == null) {
   properties = new Properties();
-  Map<String, String[]> backup = updatingResource != null ?
-  new ConcurrentHashMap<String, String[]>(updatingResource) : null;
-  loadResources(properties, resources, quietmode);
+  loadProps(properties, 0, true);
+}
+return properties;
+  }
 
+  /**
+   * Loads the resource at a given index into the properties.
+   * @param props the object containing the loaded properties.
+   * @param startIdx the index where the new resource has been added.
+   * @param fullReload flag whether we do complete reload of the conf instead
+   *   of just loading the new resource.
+   * @return the properties loaded from the resource.

Review comment:
   Could you also remove this @return? I am +1 once this is updated.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.34

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=510007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-510007
 ]

ASF GitHub Bot logged work on HADOOP-17371:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 23:46
Start Date: 10/Nov/20 23:46
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725035457


   The latest one is good. @aajisaka do you want to take another look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 510007)
Time Spent: 40m  (was: 0.5h)

> Bump Jetty to the latest version 9.4.34
> ---
>
> Key: HADOOP-17371
> URL: https://issues.apache.org/jira/browse/HADOOP-17371
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Hadoop 3 branches are on 9.4.20. We should update to the latest version: 
> 9.4.34



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2453: HADOOP-17371. Bump Jetty to the latest version 9.4.34. Contributed by Wei-Chiu Chuang.

2020-11-10 Thread GitBox


jojochuang commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725035457


   The latest one is good. @aajisaka do you want to take another look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229572#comment-17229572
 ] 

Eric Payne commented on HADOOP-17324:
-

[~csun], [~dongjoon], and [~viirya],

After this JIRA was committed (revision # 
2522bf2f9b0c720eab099fef27bd3d22460ad5d0), I am seeing the following 
compilation errors:
{noformat}
[ERROR] 
/home/ericp/hadoop/source/current/orig/hadoop-client-modules/hadoop-client-integration-tests/src/test/java/org/apache/hadoop/example/ITUseMiniCluster.java:[59,11]
 cannot find symbol
[ERROR]   symbol:   class MiniYARNCluster
{noformat}
I'm using the following mvn command to build:
{noformat}
mvn install -Pdist -DskipShade -DskipTests -Dtar -Danimal.sniffer.skip 
-Dmaven.javadoc.skip=true
{noformat}

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2457: HDFS-15680. disable broken azure test-units

2020-11-10 Thread GitBox


amahussein opened a new pull request #2457:
URL: https://github.com/apache/hadoop/pull/2457


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.34

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=509974&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509974
 ]

ASF GitHub Bot logged work on HADOOP-17371:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 22:17
Start Date: 10/Nov/20 22:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725001079


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  90m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  1s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 36s |  |  hadoop-client-minicluster in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2453/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2453 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux ed59d4423fb5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 61f8c5767e8 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2453/2/testReport/ |
   | Max. process+thread count | 545 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-client-modules/hadoop-client-minicluster U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2453/2/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2453: HADOOP-17371. Bump Jetty to the latest version 9.4.34. Contributed by Wei-Chiu Chuang.

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2453:
URL: https://github.com/apache/hadoop/pull/2453#issuecomment-725001079


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  90m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  1s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 34s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 36s |  |  hadoop-client-minicluster in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2453/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2453 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux ed59d4423fb5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 61f8c5767e8 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2453/2/testReport/ |
   | Max. process+thread count | 545 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-client-modules/hadoop-client-minicluster U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2453/2/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [hadoop] amahussein commented on pull request #2443: HDFS-15659. MiniDFSCluster dfs.namenode.redundancy.considerLoad default to false

2020-11-10 Thread GitBox


amahussein commented on pull request #2443:
URL: https://github.com/apache/hadoop/pull/2443#issuecomment-724992363


   > Thanx for the update, the approach and changes looks good. But I think 
Jenkins won't have run all the tests? If not, you can touch a line in 
hadoop-project/pom.xml and increase the timeout in Jenkinsfile from 20 to 30 
hrs, if post that everything is good we can merge the current change.
   
   Thanks @ayushtkn, I made the change to the PR.
   The failing unit tests are the usual suspects. Even {{TestDynamometerInfra}} 
started to show up in all commits since yesterday.
   
   > Out of curiosity: You don't need a new PR for a new change, you could have 
force-pushed to your previous branch itself, it would have updated the same PR.
   
   Yeah, thanks. I know I could have force-pushed, but I did not want to lose 
track of the previous implementation.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17372) S3A AWS Credential provider loading gets confused with isolated classloaders

2020-11-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229527#comment-17229527
 ] 

Steve Loughran edited comment on HADOOP-17372 at 11/10/20, 9:38 PM:


setting com.amazonaws in "spark.sql.hive.metastore.sharedPrefixes" makes this 
go away

At the same time, I think we could move the env variables provider to something 
under o.a.h so that by default everything works; you'd only need to play with 
this setting in the specific case where you are doing custom plugin stuff.


was (Author: ste...@apache.org):
setting com.aws in "spark.sql.hive.metastore.sharedPrefixes" should be enough. 
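
For example, in spark-defaults.conf the corrected setting (com.amazonaws, per the edit above) would look like this; the trailing entries are Spark's documented defaults for this key and should normally be kept:

{code}
spark.sql.hive.metastore.sharedPrefixes com.amazonaws,com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc
{code}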

> S3A AWS Credential provider loading gets confused with isolated classloaders
> 
>
> Key: HADOOP-17372
> URL: https://issues.apache.org/jira/browse/HADOOP-17372
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> Problem: exception in loading S3A credentials for an FS, "Class class 
> com.amazonaws.auth.EnvironmentVariableCredentialsProvider does not implement 
> AWSCredentialsProvider"
> Location: S3A + Spark dataframes test
> Hypothesised cause:
> Configuration.getClasses() uses the context classloader, and with the spark 
> isolated CL that's different from the one the s3a FS uses, so it can't load 
> AWS credential providers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17372) S3A AWS Credential provider loading gets confused with isolated classloaders

2020-11-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229527#comment-17229527
 ] 

Steve Loughran commented on HADOOP-17372:
-

setting com.aws in "spark.sql.hive.metastore.sharedPrefixes" should be enough. 

> S3A AWS Credential provider loading gets confused with isolated classloaders
> 
>
> Key: HADOOP-17372
> URL: https://issues.apache.org/jira/browse/HADOOP-17372
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> Problem: exception in loading S3A credentials for an FS, "Class class 
> com.amazonaws.auth.EnvironmentVariableCredentialsProvider does not implement 
> AWSCredentialsProvider"
> Location: S3A + Spark dataframes test
> Hypothesised cause:
> Configuration.getClasses() uses the context classloader, and with the spark 
> isolated CL that's different from the one the s3a FS uses, so it can't load 
> AWS credential providers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2306: HDFS-15237 Get checksum of EC file failed, when some block is missing…

2020-11-10 Thread GitBox


jojochuang commented on pull request #2306:
URL: https://github.com/apache/hadoop/pull/2306#issuecomment-724968757


   This is duplicated by HDFS-15643.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang closed pull request #2306: HDFS-15237 Get checksum of EC file failed, when some block is missing…

2020-11-10 Thread GitBox


jojochuang closed pull request #2306:
URL: https://github.com/apache/hadoop/pull/2306


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.

2020-11-10 Thread GitBox


sunchao commented on pull request #2138:
URL: https://github.com/apache/hadoop/pull/2138#issuecomment-724966448


   Oops sorry I forgot to commit this. Thanks @jojochuang for picking it up!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2351: HDFS-15608. Rename variable DistCp#CLEANUP.

2020-11-10 Thread GitBox


jojochuang merged pull request #2351:
URL: https://github.com/apache/hadoop/pull/2351


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2138: HDFS-15469. Dynamically configure the size of PacketReceiver#MAX_PACKET_SIZE.

2020-11-10 Thread GitBox


jojochuang commented on pull request #2138:
URL: https://github.com/apache/hadoop/pull/2138#issuecomment-724962754


   looks almost good barring some nitpicky checkstyle warnings. I don't think 
the javadoc errors are related, but I'll kick off a rebuild to verify.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17372) S3A AWS Credential provider loading gets confused with isolated classloaders

2020-11-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229521#comment-17229521
 ] 

Steve Loughran commented on HADOOP-17372:
-

Probable cause
* cluster fs = s3a
* FileSystem.get of default fs is using the HiveConf config (loaded in 
isolation, with its classloader referenced)
* S3A FS init creates a new Configuration object, but it copies the classloader 
ref of the hive conf

list of classes to load is effectively
{code}
conf.getClasses("fs.s3a.aws.credentials.provider", 
  "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider,
  org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
  com.amazonaws.auth.EnvironmentVariableCredentialsProvider,
  org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider")
{code}

the spark isolated CL for hive passes the o.a.h. classes through fine, but the 
com.amazonaws one is loaded in the hive CL, and so the 
EnvironmentVariableCredentialsProvider it yields really isn't a valid provider

workaround: subclass that provider into org.a.h.fs.s3a.auth

but: all other s3a extension points (signer, delegation token provider) need to 
reference implementation classes in org.apache.hadoop, or there is a risk that 
spark can't load them through hive, at least if s3a is made the cluster FS.
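
A minimal sketch of that workaround (a delegating wrapper rather than a literal subclass, to keep the sketch independent of the SDK class; the class name is hypothetical):

{code}
package org.apache.hadoop.fs.s3a.auth;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.EnvironmentVariableCredentialsProvider;

/**
 * Delegates to the AWS SDK provider, but the wrapper itself lives under
 * org.apache.hadoop, so it resolves in the same classloader as the S3A FS.
 */
public class EnvVarCredentialsProvider implements AWSCredentialsProvider {
  private final EnvironmentVariableCredentialsProvider delegate =
      new EnvironmentVariableCredentialsProvider();

  @Override
  public AWSCredentials getCredentials() {
    return delegate.getCredentials();
  }

  @Override
  public void refresh() {
    delegate.refresh();
  }
}
{code}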

> S3A AWS Credential provider loading gets confused with isolated classloaders
> 
>
> Key: HADOOP-17372
> URL: https://issues.apache.org/jira/browse/HADOOP-17372
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> Problem: exception in loading S3A credentials for an FS, "Class class 
> com.amazonaws.auth.EnvironmentVariableCredentialsProvider does not implement 
> AWSCredentialsProvider"
> Location: S3A + Spark dataframes test
> Hypothesised cause:
> Configuration.getClasses() uses the context classloader, and with the spark 
> isolated CL that's different from the one the s3a FS uses, so it can't load 
> AWS credential providers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=509927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509927
 ]

ASF GitHub Bot logged work on HADOOP-17338:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 20:51
Start Date: 10/Nov/20 20:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2455:
URL: https://github.com/apache/hadoop/pull/2455#issuecomment-724958810


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/diff-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 0 unchanged - 0 fixed 
= 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  79m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2455 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 15a766bbb014 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 61f8c5767e8 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/testReport/ |
   | Max. process+thread count | 535 (vs. ulimit of 5500) |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2455: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2455:
URL: https://github.com/apache/hadoop/pull/2455#issuecomment-724958810


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/diff-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 3 new + 0 unchanged - 0 fixed 
= 3 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  79m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2455 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 15a766bbb014 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 61f8c5767e8 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/testReport/ |
   | Max. process+thread count | 535 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2455/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID.

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-724951030







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?focusedWorklogId=509919&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509919
 ]

ASF GitHub Bot logged work on HADOOP-17318:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 20:34
Start Date: 10/Nov/20 20:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-724951030







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509919)
Time Spent: 3h 40m  (was: 3.5h)

> S3A committer to support concurrent jobs with same app attempt ID & dest dir
> 
>
> Key: HADOOP-17318
> URL: https://issues.apache.org/jira/browse/HADOOP-17318
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Reported failure of magic committer block uploads because the pending upload 
> ID is unknown. Likely cause: it's been aborted by another job.
> # Make it possible to turn off cleanup of pending uploads in the magic committer
> # Log more about uploads being deleted in committers
> # Include the upload ID in the S3ABlockOutputStream errors
> There are other concurrency issues when you look closely; see SPARK-33230:
> * the magic committer uses the app attempt ID as the path under __magic; if 
> there are duplicates then they will conflict
> * the staging committer's local temp dir uses the app attempt ID
> The fix will be to have a job UUID which, for Spark, will be picked up from 
> the SPARK-33230 changes (with an option to self-generate in job setup for 
> Hadoop 3.3.1+ with older Spark builds); fall back to the app attempt ID 
> *unless that fallback has been disabled*.
> MR: configure to use the app attempt ID.
> Spark: configure to fail job setup if the app attempt ID is the source of 
> the job UUID.
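A minimal sketch of that UUID-selection order, in plain Java against the 
Hadoop Configuration API. The three property names are illustrative 
assumptions, not the keys the eventual patch uses.

{code}
import java.util.UUID;
import org.apache.hadoop.conf.Configuration;

public final class JobUuidSketch {
  // Hypothetical keys -- the real patch may name these differently.
  static final String SPARK_UUID_KEY = "spark.sql.sources.writeJobUUID";
  static final String SELF_GENERATE = "fs.s3a.committer.uuid.generate";
  static final String REQUIRE_UUID = "fs.s3a.committer.uuid.required";

  /** Pick the job UUID: Spark-supplied, else self-generated, else app attempt. */
  static String buildJobUuid(Configuration conf, String appAttemptId) {
    String sparkUuid = conf.getTrimmed(SPARK_UUID_KEY, "");
    if (!sparkUuid.isEmpty()) {
      return sparkUuid;                    // propagated by the SPARK-33230 changes
    }
    if (conf.getBoolean(SELF_GENERATE, false)) {
      return UUID.randomUUID().toString(); // self-generate in job setup
    }
    if (conf.getBoolean(REQUIRE_UUID, false)) {
      // Spark deployments can disable the fallback and fail job setup instead.
      throw new IllegalStateException("no job UUID and app-attempt fallback disabled");
    }
    return appAttemptId;                   // MR-style fallback
  }
}
{code}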



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17365) Contract test for renaming over existing file is too lenient

2020-11-10 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HADOOP-17365:
--
Status: Patch Available  (was: In Progress)

> Contract test for renaming over existing file is too lenient
> 
>
> Key: HADOOP-17365
> URL: https://issues.apache.org/jira/browse/HADOOP-17365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{AbstractContractRenameTest#testRenameFileOverExistingFile}} is too lenient 
> in its assertions.
> * {{FileAlreadyExistsException}} is accepted regardless of "rename 
> overwrites" and "rename returns false if exists" contract options.  I think 
> it should be accepted only if both of those options are false.
> * "rename returns false if exists" option is ignored if the file is not 
> overwritten by the implementation.
> Also, I think the "rename returns false if exists" option is incorrectly 
> inverted in the test, which it can get away with because the checks are loose.
> (Found this while looking at a change in Ozone FS implementation from 
> throwing exception to returning false.  The contract test unexpectedly passed 
> without changing {{contract.xml}}.)
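A hedged sketch of the tightened assertion the ticket argues for, factored 
out as a plain helper. The two booleans stand for the "rename overwrites" and 
"rename returns false if exists" contract options; a null renameResult models 
rename() having thrown FileAlreadyExistsException. Illustrative only, not the 
actual test change.

{code}
static void checkRenameOverExistingFile(boolean overwrites,
    boolean returnsFalse, Boolean renameResult) {
  if (renameResult == null) {
    // rename() threw FileAlreadyExistsException: acceptable only when
    // both contract options are false.
    if (overwrites || returnsFalse) {
      throw new AssertionError(
          "FileAlreadyExistsException not permitted when the FS "
          + "overwrites on rename or reports false if the dest exists");
    }
    return;
  }
  if (overwrites) {
    if (!renameResult) {
      throw new AssertionError("expected rename to overwrite and return true");
    }
  } else if (returnsFalse && renameResult) {
    // The option must not be ignored when the file is not overwritten.
    throw new AssertionError("expected rename to return false over existing file");
  }
}
{code}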



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17362) Doing hadoop ls on Har file triggers too many RPC calls

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17362?focusedWorklogId=509915&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509915
 ]

ASF GitHub Bot logged work on HADOOP-17362:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 20:24
Start Date: 10/Nov/20 20:24
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #2444:
URL: https://github.com/apache/hadoop/pull/2444#issuecomment-724946131


   @jojochuang Can you please take a look at that change?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509915)
Time Spent: 1h  (was: 50m)

> Doing hadoop ls on Har file triggers too many RPC calls
> ---
>
> Key: HADOOP-17362
> URL: https://issues.apache.org/jira/browse/HADOOP-17362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [~daryn] has noticed that invoking hadoop ls on a HAR file takes too much 
> time.
> The har system has multiple deficiencies that significantly impacted 
> performance:
> # Parsing the master index references ranges within the archive index. Each 
> range required re-opening the hdfs input stream and seeking to the same 
> location where it previously stopped.
> # Listing a har stats the archive index for every "directory". The per-call 
> cache used a unique key for each stat, rendering the cache useless and 
> significantly increasing memory pressure.
> # Determining the children of a directory scans the entire archive contents 
> and filters out children. The cached metadata already stores the exact child 
> list.
> # Globbing a har's contents resulted in unnecessary stats for every leaf path.
>  
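On the first deficiency, a minimal sketch of the implied fix direction: parse 
all master-index ranges through one open stream over the archive index, 
seeking between ranges rather than reopening the HDFS input per range. The 
{offset, length} range layout and the method shape are assumptions, not the 
committed code.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public final class HarIndexReadSketch {
  /** ranges: {offset, length} pairs read from the _masterindex. */
  static void readIndexRanges(FileSystem fs, Path archiveIndex, long[][] ranges)
      throws IOException {
    FSDataInputStream in = fs.open(archiveIndex);
    try {
      for (long[] range : ranges) {
        in.seek(range[0]);                 // one stream, many seeks
        byte[] section = new byte[(int) range[1]];
        in.readFully(section);
        // ... parse index entries out of 'section' and cache the child
        // lists, so later listing calls need no further RPCs.
      }
    } finally {
      IOUtils.closeStream(in);
    }
  }
}
{code}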



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2444: HADOOP-17362. reduce RPC calls doing ls on HAR file

2020-11-10 Thread GitBox


amahussein commented on pull request #2444:
URL: https://github.com/apache/hadoop/pull/2444#issuecomment-724946131


   @jojochuang Can you please take a look at that change?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17367) Add InetAddress api to ProxyUsers.authorize

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17367?focusedWorklogId=509913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509913
 ]

ASF GitHub Bot logged work on HADOOP-17367:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 20:22
Start Date: 10/Nov/20 20:22
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #2449:
URL: https://github.com/apache/hadoop/pull/2449#issuecomment-724945077


   `hadoop.tools.dynamometer.TestDynamometerInfra` has been failing on trunk 
since Nov 9th. It cannot find the tar file.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509913)
Time Spent: 40m  (was: 0.5h)

> Add InetAddress api to ProxyUsers.authorize
> ---
>
> Key: HADOOP-17367
> URL: https://issues.apache.org/jira/browse/HADOOP-17367
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Improve the ProxyUsers implementation by passing the address of the remote 
> peer to avoid resolving the hostname.
> Similarly, this requires adding InetAddress api to MachineList.
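A small sketch of the API direction: a check that takes the remote peer's 
InetAddress and matches on the literal address, so no reverse DNS lookup 
happens. Method names here are assumptions, not the committed 
ProxyUsers/MachineList signatures.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Collections;
import java.util.Set;

public final class AddressMatchSketch {
  /** Matches by textual IP only; never calls getHostName(), hence no DNS. */
  static boolean includes(Set<String> allowedIps, InetAddress remoteAddr) {
    return allowedIps.contains(remoteAddr.getHostAddress());
  }

  public static void main(String[] args) throws UnknownHostException {
    Set<String> allowed = Collections.singleton("10.0.0.5");
    // getByName on a literal IP does not resolve anything either.
    System.out.println(includes(allowed, InetAddress.getByName("10.0.0.5")));
  }
}
{code}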



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2449: HADOOP-17367. Add InetAddress api to ProxyUsers.authorize

2020-11-10 Thread GitBox


amahussein commented on pull request #2449:
URL: https://github.com/apache/hadoop/pull/2449#issuecomment-724945077


   `hadoop.tools.dynamometer.TestDynamometerInfra` has been failing on trunk 
since Nov 9th. It cannot find the tar file.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2450: YARN-10485. TimelineConnector swallows InterruptedException

2020-11-10 Thread GitBox


amahussein commented on pull request #2450:
URL: https://github.com/apache/hadoop/pull/2450#issuecomment-724944415


   @ayushtkn can you please take a look at this change?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl merged pull request #2448: HDFS-15607. Addendum: Create trash dir when allowing snapshottable dir

2020-11-10 Thread GitBox


smengcl merged pull request #2448:
URL: https://github.com/apache/hadoop/pull/2448


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] smengcl commented on pull request #2448: HDFS-15607. Addendum: Create trash dir when allowing snapshottable dir

2020-11-10 Thread GitBox


smengcl commented on pull request #2448:
URL: https://github.com/apache/hadoop/pull/2448#issuecomment-724934005


   > LGTM. We change the exception thrown by disallowSnapshot. But this is a 
privileged operation so very unlikely to impact client applications.
   
   Yes indeed. Also, this is rare and has mild consequences. I wouldn't 
expect admins to encounter this daily. :)
   
   Thanks for the +1. Will commit shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed

2020-11-10 Thread GitBox


amahussein opened a new pull request #2456:
URL: https://github.com/apache/hadoop/pull/2456


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17372) S3A AWS Credential provider loading gets confused with isolated classloaders

2020-11-10 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229483#comment-17229483
 ] 

Steve Loughran commented on HADOOP-17372:
-

SPARK-9206 is related-ish, as it shows where the Hive classloader was tweaked 
to allow GCS classes through.

> S3A AWS Credential provider loading gets confused with isolated classloaders
> 
>
> Key: HADOOP-17372
> URL: https://issues.apache.org/jira/browse/HADOOP-17372
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> Problem: exception in loading S3A credentials for an FS, "Class class 
> com.amazonaws.auth.EnvironmentVariableCredentialsProvider does not implement 
> AWSCredentialsProvider"
> Location: S3A + Spark dataframes test
> Hypothesised cause:
> Configuration.getClasses() uses the context classloader, and with the spark 
> isolated CL that's different from the one the s3a FS uses, so it can't load 
> AWS credential providers.
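A sketch of one plausible mitigation for that hypothesis: try the classloader 
that owns the S3A-side classes before the one Configuration would use. 
Illustrative only; not the fix adopted on the subtask.

{code}
import org.apache.hadoop.conf.Configuration;

public final class ProviderLoadSketch {
  static Class<?> loadProviderClass(Configuration conf, String name)
      throws ClassNotFoundException {
    // Prefer the loader that loaded this (filesystem-side) code...
    ClassLoader own = ProviderLoadSketch.class.getClassLoader();
    try {
      return Class.forName(name, true, own);
    } catch (ClassNotFoundException e) {
      // ...then fall back to the Configuration's loader, which
      // Configuration.getClasses() consults (often the context loader).
      return Class.forName(name, true, conf.getClassLoader());
    }
  }
}
{code}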



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17096) ZStandardCompressor throws java.lang.InternalError: Error (generic)

2020-11-10 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17096.
--
Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   3.2.2
   Resolution: Fixed

Great fix. Thanks.
I'm really sorry it took so long to review it.

> ZStandardCompressor throws java.lang.InternalError: Error (generic)
> ---
>
> Key: HADOOP-17096
> URL: https://issues.apache.org/jira/browse/HADOOP-17096
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.2.1
> Environment: Our repro is on ubuntu xenial LTS, with hadoop 3.2.1 
> linking to libzstd 1.3.1. The bug is difficult to reproduce in an end-to-end 
> environment (eg running an actual hadoop job with zstd compression) because 
> it's very sensitive to the exact input and output characteristics. I 
> reproduced the bug by turning one of the existing unit tests into a crude 
> fuzzer, but I'm not sure upstream will accept that patch, so I've attached it 
> separately on this ticket.
> Note that the existing unit test for testCompressingWithOneByteOutputBuffer 
> fails to reproduce this bug. This is because it's using the license file as 
> input, and this file is too small. libzstd has internal buffering (in our 
> environment it seems to be 128 kilobytes), and the license file is only 10 
> kilobytes. Thus libzstd is able to consume all the input and compress it in a 
> single call, then return pieces of its internal buffer one byte at a time. 
> Since all the input is consumed in a single call, uncompressedDirectBufOff 
> and uncompressedDirectBufLen are both set to zero and thus the bug does not 
> reproduce.
>Reporter: Stephen Jung (Stripe)
>Assignee: Stephen Jung (Stripe)
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: fuzztest.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A bug in index handling causes ZStandardCompressor.c to pass a malformed 
> ZSTD_inBuffer to libzstd. libzstd then returns an "Error (generic)" that gets 
> thrown. The crux of the issue is two variables, uncompressedDirectBufLen and 
> uncompressedDirectBufOff. The hadoop code counts uncompressedDirectBufOff 
> from the start of uncompressedDirectBuf, then uncompressedDirectBufLen is 
> counted from uncompressedDirectBufOff. However, libzstd considers pos and 
> size to both be counted from the start of the buffer. As a result, this line 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L228
>  causes a malformed buffer to be passed to libzstd, where pos>size. Here's a 
> longer description of the bug in case this abstract explanation is unclear:
> 
> Suppose we initialize uncompressedDirectBuf (via setInputFromSavedData) with 
> five bytes of input. This results in uncompressedDirectBufOff=0 and 
> uncompressedDirectBufLen=5 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java#L140-L146).
> Then we call compress(), which initializes a ZSTD_inBuffer 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L195-L196).
>  The definition of those libzstd structs is here 
> https://github.com/facebook/zstd/blob/v1.3.1/lib/zstd.h#L251-L261 - note that 
> we set size=uncompressedDirectBufLen and pos=uncompressedDirectBufOff. The 
> ZSTD_inBuffer gets passed to libzstd, compression happens, etc. When libzstd 
> returns from the compression function, it updates the ZSTD_inBuffer struct to 
> indicate how many bytes were consumed 
> (https://github.com/facebook/zstd/blob/v1.3.1/lib/compress/zstd_compress.c#L3919-L3920).
>  Note that pos is advanced, but size is unchanged.
> Now, libzstd does not guarantee that the entire input will be compressed in a 
> single call of the compression function. (Some of the compression libraries 
> used by hadoop, such as snappy, _do_ provide this guarantee, but libzstd is 
> not one of them.) So the hadoop native code updates uncompressedDirectBufOff 
> and uncompressedDirectBufLen using the updated ZSTD_inBuffer: 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L227-L228
> Now, returning to our example, we started with 5 bytes of uncompressed input. 
> Suppose libzstd compressed 4 of those bytes, leaving one unread. This would 
> result in a ZSTD_inBuffer struct with size=5 (unchanged) and pos=4 (four 
> bytes were consumed). The hadoop native code would then set 
> uncompressedDirectBufOff=4, but it would also set uncompressedDirectBufLen=1 
> (five minus four equals one).

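To make the pos/size mismatch concrete, a hedged Java model of the 
bookkeeping (the real code is JNI C). It replays the 5-byte example and shows 
why recomputing the length from the offset produces pos > size on the next 
call.

{code}
public final class ZstdOffsetDemo {
  public static void main(String[] args) {
    // State after setInputFromSavedData() with 5 bytes of input.
    int off = 0, len = 5;        // uncompressedDirectBufOff / ...Len
    int size = len, pos = off;   // ZSTD_inBuffer: both from the buffer start
    pos = 4;                     // libzstd consumed 4 of the 5 bytes

    // Buggy update: len recomputed as bytes remaining from the offset, so
    // the next ZSTD_inBuffer is {size=1, pos=4}: pos > size, "Error (generic)".
    int buggyOff = pos, buggyLen = size - pos;
    System.out.println("buggy next inBuffer: size=" + buggyLen + " pos=" + buggyOff);

    // Consistent update: keep both counters relative to the buffer start,
    // giving {size=5, pos=4}: one byte still to consume, well-formed.
    int fixedOff = pos, fixedLen = size;
    System.out.println("fixed next inBuffer: size=" + fixedLen + " pos=" + fixedOff);
  }
}
{code}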
[jira] [Commented] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2020-11-10 Thread Yongjun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17229470#comment-17229470
 ] 

Yongjun Zhang commented on HADOOP-17338:


Hi [~ste...@apache.org],

Sorry for the delay. I tried to address your comments and created a PR. See 
linked.

Below are the changes I made:
 # Added a finally block in closeStream() to set object to null where 
wrappedStream is set to null (a sketch follows below).
 # Changed LOG.debug to LOG.warn, because I think it's worthwhile to see the 
error when it happens. I wonder if it was set to debug because these logs 
were too noisy?
 # Added a try block around wrappedStream.abort() to report and swallow any 
exception there, to possibly address HADOOP-17312 as you suggested.

Would you please take another look? Once you are OK with this diff, I will 
follow up with the integration tests.

Thanks.
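A rough sketch of the closeStream() shape those three changes describe, with 
the guarded abort(), the warn-level logging, and the finally block clearing 
both fields. Field and method names follow S3AInputStream loosely and are not 
the merged patch.

{code}
import java.io.IOException;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class CloseStreamSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CloseStreamSketch.class);
  private S3ObjectInputStream wrappedStream;
  private S3Object object;

  void closeStream(boolean forceAbort) {
    if (wrappedStream == null) {
      return;
    }
    try {
      if (forceAbort) {
        try {
          // Change 3: report and swallow abort() failures (HADOOP-17312).
          wrappedStream.abort();
        } catch (Exception e) {
          LOG.warn("failed to abort wrapped stream", e);
        }
      } else {
        wrappedStream.close();
      }
    } catch (IOException e) {
      LOG.warn("error closing stream", e);  // change 2: was LOG.debug
    } finally {
      wrappedStream = null;  // change 1: finally block clears object too
      object = null;
    }
  }
}
{code}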

> Intermittent S3AInputStream failures: Premature end of Content-Length 
> delimited message body etc
> 
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17338.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using 
> S3AInputStream:
> 1.
> {code}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: 
> Premature end of Content-Length delimited message body (expected: 156463674; 
> received: 150001089
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> at 
> 

[jira] [Work logged] (HADOOP-17096) ZStandardCompressor throws java.lang.InternalError: Error (generic)

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17096?focusedWorklogId=509899&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509899
 ]

ASF GitHub Bot logged work on HADOOP-17096:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 19:38
Start Date: 10/Nov/20 19:38
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2104:
URL: https://github.com/apache/hadoop/pull/2104


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509899)
Remaining Estimate: 0h
Time Spent: 10m

> ZStandardCompressor throws java.lang.InternalError: Error (generic)
> ---
>
> Key: HADOOP-17096
> URL: https://issues.apache.org/jira/browse/HADOOP-17096
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.2.1
> Environment: Our repro is on ubuntu xenial LTS, with hadoop 3.2.1 
> linking to libzstd 1.3.1. The bug is difficult to reproduce in an end-to-end 
> environment (eg running an actual hadoop job with zstd compression) because 
> it's very sensitive to the exact input and output characteristics. I 
> reproduced the bug by turning one of the existing unit tests into a crude 
> fuzzer, but I'm not sure upstream will accept that patch, so I've attached it 
> separately on this ticket.
> Note that the existing unit test for testCompressingWithOneByteOutputBuffer 
> fails to reproduce this bug. This is because it's using the license file as 
> input, and this file is too small. libzstd has internal buffering (in our 
> environment it seems to be 128 kilobytes), and the license file is only 10 
> kilobytes. Thus libzstd is able to consume all the input and compress it in a 
> single call, then return pieces of its internal buffer one byte at a time. 
> Since all the input is consumed in a single call, uncompressedDirectBufOff 
> and uncompressedDirectBufLen are both set to zero and thus the bug does not 
> reproduce.
>Reporter: Stephen Jung (Stripe)
>Assignee: Stephen Jung (Stripe)
>Priority: Major
> Attachments: fuzztest.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A bug in index handling causes ZStandardCompressor.c to pass a malformed 
> ZSTD_inBuffer to libzstd. libzstd then returns an "Error (generic)" that gets 
> thrown. The crux of the issue is two variables, uncompressedDirectBufLen and 
> uncompressedDirectBufOff. The hadoop code counts uncompressedDirectBufOff 
> from the start of uncompressedDirectBuf, then uncompressedDirectBufLen is 
> counted from uncompressedDirectBufOff. However, libzstd considers pos and 
> size to both be counted from the start of the buffer. As a result, this line 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L228
>  causes a malformed buffer to be passed to libzstd, where pos>size. Here's a 
> longer description of the bug in case this abstract explanation is unclear:
> 
> Suppose we initialize uncompressedDirectBuf (via setInputFromSavedData) with 
> five bytes of input. This results in uncompressedDirectBufOff=0 and 
> uncompressedDirectBufLen=5 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java#L140-L146).
> Then we call compress(), which initializes a ZSTD_inBuffer 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L195-L196).
>  The definition of those libzstd structs is here 
> https://github.com/facebook/zstd/blob/v1.3.1/lib/zstd.h#L251-L261 - note that 
> we set size=uncompressedDirectBufLen and pos=uncompressedDirectBufOff. The 
> ZSTD_inBuffer gets passed to libzstd, compression happens, etc. When libzstd 
> returns from the compression function, it updates the ZSTD_inBuffer struct to 
> indicate how many bytes were consumed 
> (https://github.com/facebook/zstd/blob/v1.3.1/lib/compress/zstd_compress.c#L3919-L3920).
>  Note that pos is advanced, but size is unchanged.
> Now, libzstd does not guarantee that the entire input will be compressed in a 
> single call of the compression function. (Some of the compression libraries 
> used by hadoop, such as snappy, _do_ provide this guarantee, but libzstd is 
> not one of them.)

[GitHub] [hadoop] jojochuang merged pull request #2104: HADOOP-17096. Fix ZStandardCompressor input buffer offset

2020-11-10 Thread GitBox


jojochuang merged pull request #2104:
URL: https://github.com/apache/hadoop/pull/2104


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17096) ZStandardCompressor throws java.lang.InternalError: Error (generic)

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17096:

Labels: pull-request-available  (was: )

> ZStandardCompressor throws java.lang.InternalError: Error (generic)
> ---
>
> Key: HADOOP-17096
> URL: https://issues.apache.org/jira/browse/HADOOP-17096
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.2.1
> Environment: Our repro is on ubuntu xenial LTS, with hadoop 3.2.1 
> linking to libzstd 1.3.1. The bug is difficult to reproduce in an end-to-end 
> environment (eg running an actual hadoop job with zstd compression) because 
> it's very sensitive to the exact input and output characteristics. I 
> reproduced the bug by turning one of the existing unit tests into a crude 
> fuzzer, but I'm not sure upstream will accept that patch, so I've attached it 
> separately on this ticket.
> Note that the existing unit test for testCompressingWithOneByteOutputBuffer 
> fails to reproduce this bug. This is because it's using the license file as 
> input, and this file is too small. libzstd has internal buffering (in our 
> environment it seems to be 128 kilobytes), and the license file is only 10 
> kilobytes. Thus libzstd is able to consume all the input and compress it in a 
> single call, then return pieces of its internal buffer one byte at a time. 
> Since all the input is consumed in a single call, uncompressedDirectBufOff 
> and uncompressedDirectBufLen are both set to zero and thus the bug does not 
> reproduce.
>Reporter: Stephen Jung (Stripe)
>Assignee: Stephen Jung (Stripe)
>Priority: Major
>  Labels: pull-request-available
> Attachments: fuzztest.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A bug in index handling causes ZStandardCompressor.c to pass a malformed 
> ZSTD_inBuffer to libzstd. libzstd then returns an "Error (generic)" that gets 
> thrown. The crux of the issue is two variables, uncompressedDirectBufLen and 
> uncompressedDirectBufOff. The hadoop code counts uncompressedDirectBufOff 
> from the start of uncompressedDirectBuf, then uncompressedDirectBufLen is 
> counted from uncompressedDirectBufOff. However, libzstd considers pos and 
> size to both be counted from the start of the buffer. As a result, this line 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L228
>  causes a malformed buffer to be passed to libzstd, where pos>size. Here's a 
> longer description of the bug in case this abstract explanation is unclear:
> 
> Suppose we initialize uncompressedDirectBuf (via setInputFromSavedData) with 
> five bytes of input. This results in uncompressedDirectBufOff=0 and 
> uncompressedDirectBufLen=5 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java#L140-L146).
> Then we call compress(), which initializes a ZSTD_inBuffer 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L195-L196).
>  The definition of those libzstd structs is here 
> https://github.com/facebook/zstd/blob/v1.3.1/lib/zstd.h#L251-L261 - note that 
> we set size=uncompressedDirectBufLen and pos=uncompressedDirectBufOff. The 
> ZSTD_inBuffer gets passed to libzstd, compression happens, etc. When libzstd 
> returns from the compression function, it updates the ZSTD_inBuffer struct to 
> indicate how many bytes were consumed 
> (https://github.com/facebook/zstd/blob/v1.3.1/lib/compress/zstd_compress.c#L3919-L3920).
>  Note that pos is advanced, but size is unchanged.
> Now, libzstd does not guarantee that the entire input will be compressed in a 
> single call of the compression function. (Some of the compression libraries 
> used by hadoop, such as snappy, _do_ provide this guarantee, but libzstd is 
> not one of them.) So the hadoop native code updates uncompressedDirectBufOff 
> and uncompressedDirectBufLen using the updated ZSTD_inBuffer: 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L227-L228
> Now, returning to our example, we started with 5 bytes of uncompressed input. 
> Suppose libzstd compressed 4 of those bytes, leaving one unread. This would 
> result in a ZSTD_inBuffer struct with size=5 (unchanged) and pos=4 (four 
> bytes were consumed). The hadoop native code would then set 
> uncompressedDirectBufOff=4, but it would also set uncompressedDirectBufLen=1 
> (five minus four equals one).

[jira] [Assigned] (HADOOP-17096) ZStandardCompressor throws java.lang.InternalError: Error (generic)

2020-11-10 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-17096:


Assignee: Stephen Jung (Stripe)

> ZStandardCompressor throws java.lang.InternalError: Error (generic)
> ---
>
> Key: HADOOP-17096
> URL: https://issues.apache.org/jira/browse/HADOOP-17096
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.2.1
> Environment: Our repro is on ubuntu xenial LTS, with hadoop 3.2.1 
> linking to libzstd 1.3.1. The bug is difficult to reproduce in an end-to-end 
> environment (eg running an actual hadoop job with zstd compression) because 
> it's very sensitive to the exact input and output characteristics. I 
> reproduced the bug by turning one of the existing unit tests into a crude 
> fuzzer, but I'm not sure upstream will accept that patch, so I've attached it 
> separately on this ticket.
> Note that the existing unit test for testCompressingWithOneByteOutputBuffer 
> fails to reproduce this bug. This is because it's using the license file as 
> input, and this file is too small. libzstd has internal buffering (in our 
> environment it seems to be 128 kilobytes), and the license file is only 10 
> kilobytes. Thus libzstd is able to consume all the input and compress it in a 
> single call, then return pieces of its internal buffer one byte at a time. 
> Since all the input is consumed in a single call, uncompressedDirectBufOff 
> and uncompressedDirectBufLen are both set to zero and thus the bug does not 
> reproduce.
>Reporter: Stephen Jung (Stripe)
>Assignee: Stephen Jung (Stripe)
>Priority: Major
> Attachments: fuzztest.patch
>
>
> A bug in index handling causes ZStandardCompressor.c to pass a malformed 
> ZSTD_inBuffer to libzstd. libzstd then returns an "Error (generic)" that gets 
> thrown. The crux of the issue is two variables, uncompressedDirectBufLen and 
> uncompressedDirectBufOff. The hadoop code counts uncompressedDirectBufOff 
> from the start of uncompressedDirectBuf, then uncompressedDirectBufLen is 
> counted from uncompressedDirectBufOff. However, libzstd considers pos and 
> size to both be counted from the start of the buffer. As a result, this line 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L228
>  causes a malformed buffer to be passed to libzstd, where pos>size. Here's a 
> longer description of the bug in case this abstract explanation is unclear:
> 
> Suppose we initialize uncompressedDirectBuf (via setInputFromSavedData) with 
> five bytes of input. This results in uncompressedDirectBufOff=0 and 
> uncompressedDirectBufLen=5 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.java#L140-L146).
> Then we call compress(), which initializes a ZSTD_inBuffer 
> (https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L195-L196).
>  The definition of those libzstd structs is here 
> https://github.com/facebook/zstd/blob/v1.3.1/lib/zstd.h#L251-L261 - note that 
> we set size=uncompressedDirectBufLen and pos=uncompressedDirectBufOff. The 
> ZSTD_inBuffer gets passed to libzstd, compression happens, etc. When libzstd 
> returns from the compression function, it updates the ZSTD_inBuffer struct to 
> indicate how many bytes were consumed 
> (https://github.com/facebook/zstd/blob/v1.3.1/lib/compress/zstd_compress.c#L3919-L3920).
>  Note that pos is advanced, but size is unchanged.
> Now, libzstd does not guarantee that the entire input will be compressed in a 
> single call of the compression function. (Some of the compression libraries 
> used by hadoop, such as snappy, _do_ provide this guarantee, but libzstd is 
> not one of them.) So the hadoop native code updates uncompressedDirectBufOff 
> and uncompressedDirectBufLen using the updated ZSTD_inBuffer: 
> https://github.com/apache/hadoop/blob/rel/release-3.2.1/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardCompressor.c#L227-L228
> Now, returning to our example, we started with 5 bytes of uncompressed input. 
> Suppose libzstd compressed 4 of those bytes, leaving one unread. This would 
> result in a ZSTD_inBuffer struct with size=5 (unchanged) and pos=4 (four 
> bytes were consumed). The hadoop native code would then set 
> uncompressedDirectBufOff=4, but it would also set uncompressedDirectBufLen=1 
> (five minus four equals one).
> Since some of the input was not consumed, we 

[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=509898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509898
 ]

ASF GitHub Bot logged work on HADOOP-17338:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 19:30
Start Date: 10/Nov/20 19:30
Worklog Time Spent: 10m 
  Work Description: yzhangal opened a new pull request #2455:
URL: https://github.com/apache/hadoop/pull/2455


   …Content-Length delimited message body etc
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509898)
Remaining Estimate: 0h
Time Spent: 10m

> Intermittent S3AInputStream failures: Premature end of Content-Length 
> delimited message body etc
> 
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
> Attachments: HADOOP-17338.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using 
> S3AInputStream:
> 1.
> {code}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: 
> Premature end of Content-Length delimited message body (expected: 156463674; 
> received: 150001089
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at 

[GitHub] [hadoop] yzhangal opened a new pull request #2455: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …

2020-11-10 Thread GitBox


yzhangal opened a new pull request #2455:
URL: https://github.com/apache/hadoop/pull/2455


   …Content-Length delimited message body etc
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17338:

Labels: pull-request-available  (was: )

> Intermittent S3AInputStream failures: Premature end of Content-Length 
> delimited message body etc
> 
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17338.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using 
> S3AInputStream:
> 1.
> {code}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: 
> Premature end of Content-Length delimited message body (expected: 156463674; 
> received: 150001089
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)

[GitHub] [hadoop] hadoop-yetus commented on pull request #2399: HADOOP-17318. Support concurrent S3A commit jobs with same app attempt ID.

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-724917048


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 55s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  18m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m 36s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  6s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  24m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  20m 13s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 53s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/6/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 4 new + 48 unchanged - 1 fixed = 52 total (was 
49)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/6/artifact/out/whitespace-eol.txt)
 |  The patch has 2 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   4m 44s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 37s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2399 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 667abdab7b52 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4331c88352d |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 

[jira] [Work logged] (HADOOP-17318) S3A committer to support concurrent jobs with same app attempt ID & dest dir

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17318?focusedWorklogId=509897&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509897
 ]

ASF GitHub Bot logged work on HADOOP-17318:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 19:28
Start Date: 10/Nov/20 19:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2399:
URL: https://github.com/apache/hadoop/pull/2399#issuecomment-724917048


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 55s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  18m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   1m 36s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  6s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  24m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  20m 13s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 53s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/6/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 4 new + 48 unchanged - 1 fixed = 52 total (was 
49)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/6/artifact/out/whitespace-eol.txt)
 |  The patch has 2 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  16m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   4m 44s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 37s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 215m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2399/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2399 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux 667abdab7b52 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | 

[GitHub] [hadoop] jojochuang merged pull request #2426: YARN-10480. replace href tags with ng-href

2020-11-10 Thread GitBox


jojochuang merged pull request #2426:
URL: https://github.com/apache/hadoop/pull/2426


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2452: HADOOP-17370. Upgrade commons-compress to 1.20

2020-11-10 Thread GitBox


steveloughran commented on pull request #2452:
URL: https://github.com/apache/hadoop/pull/2452#issuecomment-724879870


   WASB failures are unrelated (HTTP client upgrade); unsure about the others.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.20

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=509866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509866
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 18:20
Start Date: 10/Nov/20 18:20
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2452:
URL: https://github.com/apache/hadoop/pull/2452#issuecomment-724879870


   WASB failures are unrelated (HTTP client upgrade); unsure about the others.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509866)
Time Spent: 1h  (was: 50m)

> Upgrade commons-compress to 1.20
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2162: HDFS-15485. Fix outdated properties of JournalNode when performing rollback

2020-11-10 Thread GitBox


jojochuang commented on pull request #2162:
URL: https://github.com/apache/hadoop/pull/2162#issuecomment-724840995


   Thanks. Merged to trunk. I'll see if it makes sense to cherry-pick to lower 
branches.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2162: HDFS-15485. Fix outdated properties of JournalNode when performing rollback

2020-11-10 Thread GitBox


jojochuang merged pull request #2162:
URL: https://github.com/apache/hadoop/pull/2162


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #2436: HADOOP-17358. Improve excessive reloading of Configurations

2020-11-10 Thread GitBox


jojochuang commented on a change in pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#discussion_r520727890



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -2893,7 +2909,7 @@ protected synchronized Properties getProps() {
 }
   }
 }
-return properties;
+return props;

Review comment:
   (I thought I posted a comment here, but somehow it got lost in the GitHub 
comments.)
   Returning this object isn't necessary, and the callers clearly don't use 
the return value.
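
   For context, a minimal sketch of the pattern in the diff above; the class 
and field names here are stand-ins for illustration, not Hadoop's actual 
{{Configuration}} code. The change loads the properties into a local variable, 
publishes the field only once the object is fully built, and returns the local:

{code}
import java.util.Properties;

public class LazyConf {
  private Properties properties;   // cached; a reload nulls this out

  protected synchronized Properties getProps() {
    Properties props = properties;
    if (props == null) {
      // Build into a local first so the field is only ever assigned a
      // fully-populated object (stand-in for the real resource loading).
      props = new Properties();
      props.setProperty("example.key", "example.value");
      properties = props;
    }
    return props;   // the local reference, as in the diff above
  }
}
{code}

   Whether to return anything at all is a separate question; as the comment 
notes, the callers ignore the return value.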





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17358) Improve excessive reloading of Configurations

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17358?focusedWorklogId=509829&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509829
 ]

ASF GitHub Bot logged work on HADOOP-17358:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 17:09
Start Date: 10/Nov/20 17:09
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2436:
URL: https://github.com/apache/hadoop/pull/2436#discussion_r520727890



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
##
@@ -2893,7 +2909,7 @@ protected synchronized Properties getProps() {
 }
   }
 }
-return properties;
+return props;

Review comment:
   (I thought I posted a comment here, but somehow it got lost in the GitHub 
comments.)
   Returning this object isn't necessary, and the callers clearly don't use 
the return value.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509829)
Time Spent: 1.5h  (was: 1h 20m)

> Improve excessive reloading of Configurations
> -
>
> Key: HADOOP-17358
> URL: https://issues.apache.org/jira/browse/HADOOP-17358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> [~daryn] reported that adding a new resource to a conf forces a complete 
> reload of the conf instead of just loading the new resource. Instantiating an 
> {{SSLFactory}} adds a new resource for the ssl client/server file. Formerly 
> only the KMS client used the SSLFactory, but now TLS/RPC uses it too.
> The reload is so costly that RM token cancellation falls behind by hours or 
> days. The accumulation of uncancelled tokens in the KMS rose from a few 
> thousand to hundreds of thousands, which risks ZK scalability issues that 
> could cause a KMS outage.
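
To make the reported amplification concrete, here is a minimal sketch against 
the public {{org.apache.hadoop.conf.Configuration}} API (the resource name is 
illustrative): once properties have been loaded, each {{addResource(...)}} 
call invalidates the cached properties, so the next lookup re-parses every 
previously added resource, not just the new one.

{code}
import org.apache.hadoop.conf.Configuration;

public class ReloadAmplification {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.get("fs.defaultFS");            // first lookup loads all resources

    // Illustrative resource name; per the description above, this is
    // effectively what each new SSLFactory instance does.
    conf.addResource("ssl-client.xml");

    // addResource invalidated the cached Properties, so this lookup
    // re-parses core-default.xml, core-site.xml and the new resource:
    // a complete reload for every resource added.
    conf.get("fs.defaultFS");
  }
}
{code}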



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.20

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=509827&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509827
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 17:07
Start Date: 10/Nov/20 17:07
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2452:
URL: https://github.com/apache/hadoop/pull/2452#issuecomment-724838032


   There are quite a few UT failures with the message: 
"java.lang.OutOfMemoryError: unable to create new native thread". While I think 
they are unrelated, I kicked off the CI again just to be sure.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509827)
Time Spent: 50m  (was: 40m)

> Upgrade commons-compress to 1.20
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2452: HADOOP-17370. Upgrade commons-compress to 1.20

2020-11-10 Thread GitBox


sunchao commented on pull request #2452:
URL: https://github.com/apache/hadoop/pull/2452#issuecomment-724838032


   There are quite a few UT failures with the message: 
"java.lang.OutOfMemoryError: unable to create new native thread". While I think 
they are unrelated, I kicked off the CI again just to be sure.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17324?focusedWorklogId=509822&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509822
 ]

ASF GitHub Bot logged work on HADOOP-17324:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 17:00
Start Date: 10/Nov/20 17:00
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2411:
URL: https://github.com/apache/hadoop/pull/2411#issuecomment-724833749


   Thanks @steveloughran! Yes, please backport it to branch-3.3 as well.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509822)
Time Spent: 3h 50m  (was: 3h 40m)

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}
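
A quick way to reproduce the failure mode described above is to probe for both 
class names (a sketch; the class names are taken from the stack trace). The 
shaded client jars rewrote Hadoop's references to the relocated name, but no 
jar on a typical downstream classpath contains that name, while the stock 
BouncyCastle artifacts users already supply contain the original one:

{code}
public class RelocationProbe {
  private static void probe(String className) {
    try {
      Class.forName(className);
      System.out.println("present: " + className);
    } catch (ClassNotFoundException e) {
      System.out.println("missing: " + className);
    }
  }

  public static void main(String[] args) {
    // Relocated name referenced by the shaded client jars before this fix:
    probe("org.apache.hadoop.shaded.org.bouncycastle.operator."
        + "OperatorCreationException");
    // Original name, satisfied by the bcprov/bcpkix jars downstream apps add:
    probe("org.bouncycastle.operator.OperatorCreationException");
  }
}
{code}

With the relocation skipped, the shaded jars reference the original name, and 
the BouncyCastle dependency downstream apps already declare satisfies it.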



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[GitHub] [hadoop] sunchao commented on pull request #2411: HADOOP-17324. Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread GitBox


sunchao commented on pull request #2411:
URL: https://github.com/apache/hadoop/pull/2411#issuecomment-724833749


   Thanks @steveloughran! Yes, please backport it to branch-3.3 as well.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17370) Upgrade commons-compress to 1.20

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?focusedWorklogId=509810&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509810
 ]

ASF GitHub Bot logged work on HADOOP-17370:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 16:36
Start Date: 10/Nov/20 16:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2452:
URL: https://github.com/apache/hadoop/pull/2452#issuecomment-724819133


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  19m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |  27m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  23m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  25m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  23m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  25m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   8m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |  10m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 559m 30s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2452/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 23s | 
[/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2452/1/artifact/out/patch-asflicense-problems.txt)
 |  The patch generated 17 ASF License warnings.  |
   |  |   | 843m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.tools.dynamometer.TestDynamometerInfra |
   |   | hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown 
|
   |   | hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown 
|
   |   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
|
   |   | hadoop.metrics2.source.TestJvmMetrics |
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2452: HADOOP-17370. Upgrade commons-compress to 1.20

2020-11-10 Thread GitBox


hadoop-yetus commented on pull request #2452:
URL: https://github.com/apache/hadoop/pull/2452#issuecomment-724819133


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  19m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  mvnsite  |  27m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   7m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   7m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  23m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  25m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  23m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  25m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 14s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  18m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   8m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |  10m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 559m 30s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2452/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 23s | 
[/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2452/1/artifact/out/patch-asflicense-problems.txt)
 |  The patch generated 17 ASF License warnings.  |
   |  |   | 843m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.tools.dynamometer.TestDynamometerInfra |
   |   | hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown 
|
   |   | hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown 
|
   |   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
|
   |   | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.datanode.TestDataNodeMXBean |
   |   | hadoop.hdfs.server.datanode.checker.TestDatasetVolumeCheckerFailures |
   |   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.mover.TestStorageMover |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRecovery |
   |   | 

[jira] [Work logged] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17324?focusedWorklogId=509783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509783
 ]

ASF GitHub Bot logged work on HADOOP-17324:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 16:01
Start Date: 10/Nov/20 16:01
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2411:
URL: https://github.com/apache/hadoop/pull/2411#issuecomment-724796787


   committed to trunk. Is this needed on branch-3.3 too?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509783)
Time Spent: 3h 40m  (was: 3.5h)

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issues because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Work logged] (HADOOP-17362) Doing hadoop ls on Har file triggers too many RPC calls

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17362?focusedWorklogId=509784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509784
 ]

ASF GitHub Bot logged work on HADOOP-17362:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 16:01
Start Date: 10/Nov/20 16:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2444:
URL: https://github.com/apache/hadoop/pull/2444#issuecomment-723361091


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 17s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 15s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 13s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 200 unchanged 
- 2 fixed = 200 total (was 202)  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 21s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   9m 37s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2444/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 168m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestLdapGroupsMapping |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2444/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2444 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fafe0bf78f14 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4b312810ae0 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 

[jira] [Work logged] (HADOOP-17324) Don't relocate org.bouncycastle in shaded client jars

2020-11-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17324?focusedWorklogId=509781&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-509781
 ]

ASF GitHub Bot logged work on HADOOP-17324:
---

Author: ASF GitHub Bot
Created on: 10/Nov/20 16:00
Start Date: 10/Nov/20 16:00
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2411:
URL: https://github.com/apache/hadoop/pull/2411


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 509781)
Time Spent: 3.5h  (was: 3h 20m)

> Don't relocate org.bouncycastle in shaded client jars
> -
>
> Key: HADOOP-17324
> URL: https://issues.apache.org/jira/browse/HADOOP-17324
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> When downstream apps depend on {{hadoop-client-api}}, 
> {{hadoop-client-runtime}} and {{hadoop-client-minicluster}}, it seems the 
> {{MiniYARNCluster}} could have issue because 
> {{org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException}}
>  is not in any of the above jars. 
> {code}
> Error:  Caused by: sbt.ForkMain$ForkError: java.lang.ClassNotFoundException: 
> org.apache.hadoop.shaded.org.bouncycastle.operator.OperatorCreationException
> Error:at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
> Error:at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> Error:at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:862)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
> Error:at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:353)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:127)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:488)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:109)
> Error:at 
> org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:321)
> Error:at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> Error:at 
> org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:94)
> Error:at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
> Error:at 
> org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
> Error:at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
> Error:at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
> Error:at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
> Error:at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
> Error:at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Error:at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Error:at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2444: HADOOP-17362. reduce RPC calls doing ls on HAR file

2020-11-10 Thread GitBox


hadoop-yetus removed a comment on pull request #2444:
URL: https://github.com/apache/hadoop/pull/2444#issuecomment-723361091


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  compile  |  17m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +0 :ok: |  spotbugs  |   2m 17s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 15s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javac  |  19m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  javac  |  17m 13s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 200 unchanged 
- 2 fixed = 200 total (was 202)  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10  |
   | +1 :green_heart: |  findbugs  |   2m 21s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   9m 37s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2444/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 168m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestLdapGroupsMapping |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2444/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2444 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fafe0bf78f14 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4b312810ae0 |
   | Default Java | Private Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9+11-Ubuntu-0ubuntu1.18.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_272-8u272-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2444/1/testReport/ |
   | Max. process+thread count | 1461 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2444/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.1.3 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
 
