[jira] [Commented] (HADOOP-14775) Change junit dependency in parent pom file to junit 5 while maintaining backward compatibility to junit4.

2018-10-18 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656310#comment-16656310
 ] 

Takanobu Asanuma commented on HADOOP-14775:
---

Thanks for updating it, [~ajisakaa]. I'm going to run a QBT test with the patch 
in my local env and will commit it early next week if there are no problems.
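
For readers following along, a minimal sketch of how a parent pom can pull in 
JUnit 5 while keeping JUnit 4 tests running. The junit-jupiter/junit-vintage 
coordinates and the version are assumptions about the standard wiring, not the 
contents of the patch itself:
{code:xml}
<dependencyManagement>
  <dependencies>
    <!-- JUnit 5 API for tests written against Jupiter -->
    <dependency>
      <groupId>org.junit.jupiter</groupId>
      <artifactId>junit-jupiter-api</artifactId>
      <version>5.3.1</version>
      <scope>test</scope>
    </dependency>
    <!-- Vintage engine runs existing JUnit 4 tests unchanged -->
    <dependency>
      <groupId>org.junit.vintage</groupId>
      <artifactId>junit-vintage-engine</artifactId>
      <version>5.3.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}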

> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 
> --
>
> Key: HADOOP-14775
> URL: https://issues.apache.org/jira/browse/HADOOP-14775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Ajay Kumar
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: junit5
> Attachments: HADOOP-14775.01.patch, HADOOP-14775.02.patch, 
> HADOOP-14775.03.patch, HADOOP-14775.04.patch, HADOOP-14775.05.patch, 
> HADOOP-14775.06.patch
>
>
> Change junit dependency in parent pom file to junit 5 while maintaining 
> backward compatibility to junit4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656253#comment-16656253
 ] 

Hadoop QA commented on HADOOP-15850:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 43 unchanged - 0 fixed = 44 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
53s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944651/HADOOP-15850.v5.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7ef2b22cdca6 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 13cc0f5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15392/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15392/testReport/ |
| Max. process+thread count | 443 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15392/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-18 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656252#comment-16656252
 ] 

Íñigo Goiri commented on HADOOP-15821:
--

[~eyang] could you take a look at [^HADOOP-15821.009.patch] and potentially 
commit it if it looks good? 

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch, HADOOP-15821.009.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, it can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency on YARN.
> We should move it into Hadoop Common and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656199#comment-16656199
 ] 

Ted Yu commented on HADOOP-15850:
-

Thanks for the review. It looks like this bug could have been discovered sooner.

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false; otherwise, the following from toString() would 
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-15850:

Attachment: HADOOP-15850.v5.patch

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> HADOOP-15850.v4.patch, HADOOP-15850.v5.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false; otherwise, the following from toString() would 
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656170#comment-16656170
 ] 

Wei-Chiu Chuang commented on HADOOP-15850:
--

I think the fix makes sense. In addition, you should remove the following line 
in TestCopyCommitter in order to exercise the fix. (With this line commented 
out, the tests fail without the fix.)
{code:title=TestCopyCommitter.java}
// Unset listing file path since the config is shared by
// multiple tests, and some test doesn't set it, such as
// testNoCommitAction, but the distcp code will check it.
config.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, "");
{code}
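
For readers following along, a minimal sketch of the kind of guard the issue 
summary calls for. The DistCpOptionSwitch lookup is an assumption based on the 
DistCp code quoted below; the actual patch may differ:
{code:title=CopyCommitter.java (sketch)}
// Concatenation is meaningful only when the copy was chunked, i.e.
// -blocksperchunk was set to a positive value. Without such a guard,
// independent whole files get compared as if they were chunks of one file.
int blocksPerChunk = conf.getInt(
    DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
if (blocksPerChunk > 0) {
  concatFileChunks(conf);
}
{code}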

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> HADOOP-15850.v4.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false; otherwise, the following from toString() would 
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656126#comment-16656126
 ] 

Hudson commented on HADOOP-15418:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15264 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15264/])
HADOOP-15418. Hadoop KMSAuthenticationFilter needs to use (weichiu: rev 
cd2158456db8c89eeea64b72654a736ea8607e23)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAuthenticationFilter.java
* (add) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAuthenticationFilter.java


> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch, 
> HADOOP-15418.3.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411; this applies the 
> same fix to KMSAuthenticationFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-15850:


Assignee: Ted Yu

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> HADOOP-15850.v4.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false; otherwise, the following from toString() would 
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656090#comment-16656090
 ] 

Wei-Chiu Chuang commented on HADOOP-15850:
--

I am hoping to review the patch later today because I'm really interested in 
it. [~ste...@apache.org], please wait for me.

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> HADOOP-15850.v4.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen: the two bulk-loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false; otherwise, the following from toString() would 
> have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk-loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15418:
-
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed rev 3 to trunk. Thanks, [~suma.shivaprasad] and lqjack.

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch, 
> HADOOP-15418.3.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411; this applies the 
> same fix to KMSAuthenticationFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15418:
-
Attachment: HADOOP-15418.3.patch

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch, 
> HADOOP-15418.3.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411; this applies the 
> same fix to KMSAuthenticationFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656081#comment-16656081
 ] 

Wei-Chiu Chuang commented on HADOOP-15418:
--

+1. HADOOP-15411 made Configuration.getPropsWithPrefix() thread-safe, so use 
that API instead to avoid ConcurrentModificationException (although I don't 
think this is a problem in the context of the KMS server).

Patch rev 002 has a trivial checkstyle warning. I can fix that and post a 
rev 003 for posterity.
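
To illustrate the pattern in question, a sketch follows. The CONFIG_PREFIX 
constant and the loop shape are assumptions for illustration, not the exact 
patch:
{code:title=sketch}
// Iterating a live Configuration can throw ConcurrentModificationException
// if another thread mutates it mid-iteration (Configuration implements
// Iterable<Map.Entry<String, String>>).
Map<String, String> props = new HashMap<>();
for (Map.Entry<String, String> entry : conf) {
  if (entry.getKey().startsWith(CONFIG_PREFIX)) {
    props.put(entry.getKey().substring(CONFIG_PREFIX.length()),
        entry.getValue());
  }
}

// With HADOOP-15411, getPropsWithPrefix() snapshots the matching properties
// safely and returns the keys with the prefix already stripped:
Map<String, String> safeProps = conf.getPropsWithPrefix(CONFIG_PREFIX);
{code}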

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411. Fixing this in 
> KMSAuthenticationFilter as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656069#comment-16656069
 ] 

Hadoop QA commented on HADOOP-14556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 39 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
25s{color} | {color:green} root generated 0 new + 1316 unchanged - 1 fixed = 
1316 total (was 1317) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 28 new + 185 unchanged 
- 8 fixed = 213 total (was 193) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 159 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged 
- 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
7s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
13s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Comment Edited] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655702#comment-16655702
 ] 

Da Zhou edited comment on HADOOP-15860 at 10/18/18 10:24 PM:
-

The list request response from the server contains the file/dir name without 
the trailing period, so yes, the fix is required on the service side.


was (Author: danielzhou):
The list request response from the server doesn't contain the data; yes, the 
fix is required on the service side.

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.'), the period 
> is silently dropped, and the directory will be listed as simply '/test'. 
> '/test.test' appears to work just fine.
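> A hypothetical repro sketch (the account URI is a placeholder; mkdirs and 
> listStatus stand in for whichever operations exhibit the drop):
> {code:title=sketch}
> FileSystem fs = FileSystem.get(
>     URI.create("abfs://container@account.dfs.core.windows.net/"), conf);
> fs.mkdirs(new Path("/test."));   // created with a trailing period
> fs.listStatus(new Path("/"));    // listing shows "/test" -- period dropped
> {code}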



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655935#comment-16655935
 ] 

Steve Loughran commented on HADOOP-14556:
-

HADOOP-14556 patch 014
* Session & role tokens postpone reading client-side config options and 
building the STS client until token creation, so server-side deployments 
without the relevant options all work (+tests).
* The Writable/Serializable encryption methods class adds an enum for the 
client side too, plus a version UID & checks in the writable to verify it 
hasn't changed. Why so? It's a placeholder for the client side. I know that's 
controversial, but I don't want to box it out.
* Using Optional over null in a few places. As usual, mixed feelings: we 
can't use map or forEach much because all our code throws IOEs.
* Tests: more of them, plus a base class with common methods.
* Fixed up the bouncy castle classpath after the latest YARN changes.

Testing? Not right now. It's late.
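
As background for the notes above, a minimal sketch of the client-side flow 
from the issue summary, using the standard Hadoop API rather than code from 
the patch (the {{job}} variable is a hypothetical MapReduce Job):
{code:title=sketch}
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);

// An authenticated client requests a token; the short-lived session
// secret & id live inside it and are marshalled with the job.
Token<?> token = fs.getDelegationToken("renewer");

// e.g. attach it to a job's credentials so it travels with the job:
job.getCredentials().addToken(token.getService(), token);
{code}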

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Patch Available  (was: Open)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Attachment: HADOOP-14556-015.patch

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-18 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Open  (was: Patch Available)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655885#comment-16655885
 ] 

Hadoop QA commented on HADOOP-15418:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
8s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944612/HADOOP-15418.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9736582c0338 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / beb850d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15390/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15390/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15390/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655813#comment-16655813
 ] 

Robert Kanter commented on HADOOP-15832:


[~leftnoteasy], sorry about that. It should be fine now: YARN-8899 fixes it 
and has been committed.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
> In particular, the newest release, 1.46, is from {color:#FF0000}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.
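> For contrast, a sketch of the recommended coordinates described above 
> (assuming the dependency stays test-scoped until YARN-6586 ships it):
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk15on</artifactId>
>   <version>1.60</version>
>   <scope>test</scope>
> </dependency>
> {code}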



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655800#comment-16655800
 ] 

Suma Shivaprasad commented on HADOOP-15418:
---

Fixed Checkstyle and ASF License.

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411; this applies the 
> same fix to KMSAuthenticationFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated HADOOP-15418:
--
Attachment: HADOOP-15418.2.patch

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: HADOOP-15418.1.patch, HADOOP-15418.2.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411. Fixing this in 
> KMSAuthenticationFilter as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-18 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652688#comment-16652688
 ] 

Da Zhou edited comment on HADOOP-15823 at 10/18/18 7:08 PM:


[~mackrorysd] I got time to check the logs; the failed request you shared is 
caused by a missing authorization *permission*. I'm wondering if there is a 
configuration issue, but since it worked for you before setting the tenant 
ID/client ID to empty, it is really weird. 
 I am trying to set up, create, and test MSI myself; it might take some time 
since I am not familiar with this. Hopefully I can give an update soon.


was (Author: danielzhou):
[~mackrorysd] I got the time to check the logs, the failed request you shared 
is caused by need the *blob permission*. I'm wondering if there is any 
configuration issues.. But since it works for you before setting the 
tenantid/client id to empty, it is really weird. 
I am trying to set create and test MSI by myself, it might take sometime since 
I am not familiar with this, hopefully I can give a update soon.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655763#comment-16655763
 ] 

Wangda Tan commented on HADOOP-15832:
-

[~rkanter], should we revert the two problematic JIRAs while the investigation 
is ongoing? We should not break trunk for too long.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655734#comment-16655734
 ] 

Hadoop QA commented on HADOOP-15418:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-common-project/hadoop-kms: The patch 
generated 10 new + 4 unchanged - 0 fixed = 14 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
58s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15418 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944579/HADOOP-15418.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b554b964c342 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ba7e816 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15389/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-kms.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15389/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15389/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 308 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15389/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1

2018-10-18 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655724#comment-16655724
 ] 

Íñigo Goiri commented on HADOOP-15483:
--

The RBF UI needed a couple more changes.
I posted them in HDFS-14005.

> Upgrade jquery to version 3.3.1
> ---
>
> Key: HADOOP-15483
> URL: https://issues.apache.org/jira/browse/HADOOP-15483
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HADOOP-15483-branch-3.1.001.patch, 
> HADOOP-15483.001.patch, HADOOP-15483.002.patch, HADOOP-15483.003.patch, 
> HADOOP-15483.004.patch, HADOOP-15483.005.patch, HADOOP-15483.006.patch, 
> HADOOP-15483.007.patch, HADOOP-15483.008.patch
>
>
> This Jira aims to upgrade jquery to version 3.3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655720#comment-16655720
 ] 

Robert Kanter commented on HADOOP-15832:


I think I've figured out a more general solution by adding 
{{hadoop-yarn-server-web-proxy}} to the {{hadoop-minicluster}} pom.  I've 
posted a patch on YARN-8899; let's continue this discussion there.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655708#comment-16655708
 ] 

Eric Yang edited comment on HADOOP-15832 at 10/18/18 6:26 PM:
--

The output of mvn dependency:tree shows:

{code}
[INFO] +- org.apache.hadoop:hadoop-minicluster:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  |  \- 
org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  | +- org.objenesis:objenesis:jar:1.0:test
[INFO] |  |  | \- de.ruedigermoeller:fst:jar:2.50:test
[INFO] |  |  |\- com.cedarsoftware:java-util:jar:1.9.0:test
[INFO] |  |  |   \- com.cedarsoftware:json-io:jar:2.5.1:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.3.0-SNAPSHOT:test
[INFO] |  | \- org.apache.commons:commons-csv:jar:1.0:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-common:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-app:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:3.3.0-SNAPSHOT:test
[INFO] |  \- 
org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.3.0-SNAPSHOT:test
{code}

hadoop-yarn-server-web-proxy is included by hadoop-mapreduce-client-app, but 
the Bouncy Castle jar files are excluded in the hadoop-yarn-server-web-proxy 
project. If another project depends on minicluster, it will bring in the client 
jars as well as the minicluster jar file AND the yarn-server-web-proxy jar 
file. This causes unit tests that depend on minicluster to reference the 
non-shaded version of the hadoop-yarn-server-web-proxy classes while the 
transitive dependencies are missing.


was (Author: eyang):
When looking at mvn dependency:tree, it shows that:

{code}
[INFO] +- org.apache.hadoop:hadoop-minicluster:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  |  \- 
org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  | +- org.objenesis:objenesis:jar:1.0:test
[INFO] |  |  | \- de.ruedigermoeller:fst:jar:2.50:test
[INFO] |  |  |\- com.cedarsoftware:java-util:jar:1.9.0:test
[INFO] |  |  |   \- com.cedarsoftware:json-io:jar:2.5.1:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.3.0-SNAPSHOT:test
[INFO] |  | \- org.apache.commons:commons-csv:jar:1.0:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-common:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-app:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:3.3.0-SNAPSHOT:test
[INFO] |  \- 
org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.3.0-SNAPSHOT:test
{code}

Hadoop-yarn-server-web-proxy is included by hadoop-mapreduce-client-app, but 
bouncy castle jar files are excluded in hadoop-yarn-server-web-proxy project.  
If other project depends on minicluster, it will brought in client jars as well 
as minicluster jar file AND yarn-server-web-proxy jar file.  This cause unit 
tests that depends on minicluster to reference the non-shaded version of 
hadoop-yarn-server-web-proxy classes, but the transitive dependencies are 
missing.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>

[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655708#comment-16655708
 ] 

Eric Yang commented on HADOOP-15832:


The output of mvn dependency:tree shows:

{code}
[INFO] +- org.apache.hadoop:hadoop-minicluster:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  |  \- 
org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  | +- org.objenesis:objenesis:jar:1.0:test
[INFO] |  |  | \- de.ruedigermoeller:fst:jar:2.50:test
[INFO] |  |  |\- com.cedarsoftware:java-util:jar:1.9.0:test
[INFO] |  |  |   \- com.cedarsoftware:json-io:jar:2.5.1:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.3.0-SNAPSHOT:test
[INFO] |  | \- org.apache.commons:commons-csv:jar:1.0:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.3.0-SNAPSHOT:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-common:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-app:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  +- 
org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.3.0-SNAPSHOT:test
[INFO] |  |  \- 
org.apache.hadoop:hadoop-mapreduce-client-shuffle:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.3.0-SNAPSHOT:test
[INFO] |  +- 
org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:3.3.0-SNAPSHOT:test
[INFO] |  \- 
org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.3.0-SNAPSHOT:test
{code}

hadoop-yarn-server-web-proxy is included by hadoop-mapreduce-client-app, but 
the Bouncy Castle jar files are excluded in the hadoop-yarn-server-web-proxy 
project. If another project depends on minicluster, it will bring in the client 
jars as well as the minicluster jar file AND the yarn-server-web-proxy jar 
file. This causes unit tests that depend on minicluster to reference the 
non-shaded version of the hadoop-yarn-server-web-proxy classes while the 
transitive dependencies are missing.

> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> 
>org.bouncycastle
>bcprov-jdk16
>1.46
>test
> 
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from {color:#FF}2011{color}! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655702#comment-16655702
 ] 

Da Zhou commented on HADOOP-15860:
--

The list request response from the server doesn't contain the data, so yes, 
the fix is required on the service side.

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655623#comment-16655623
 ] 

Da Zhou commented on HADOOP-15860:
--

I verified this behavior and saw the same thing. I'm checking the code to see 
whether it is a bug in the driver or the server.

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655623#comment-16655623
 ] 

Da Zhou edited comment on HADOOP-15860 at 10/18/18 5:23 PM:


Thanks [~mackrorysd], I verified this behavior and saw the same thing. I'm 
checking the code to see whether it is a bug in the driver or the server.


was (Author: danielzhou):
I verified this behavior and saw the same thing. I'm checking the code to see 
if it is a bug in driver or server.

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15414) Job submit not work well on HDFS Federation with Transparent Encryption feature

2018-10-18 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655596#comment-16655596
 ] 

Xiao Chen commented on HADOOP-15414:


Thanks for following up, [~ste...@apache.org]. I didn't test to verify my 
statement, but I believe HADOOP-14445 addresses this. [~hexiaoqiao], would you 
be able to verify?

> Job submit not work well on HDFS Federation with Transparent Encryption 
> feature
> ---
>
> Key: HADOOP-15414
> URL: https://issues.apache.org/jira/browse/HADOOP-15414
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15414-trunk.001.patch, 
> HADOOP-15414-trunk.002.patch
>
>
> When submit sample MapReduce job WordCount which read/write path under 
> encryption zone on HDFS Federation in security mode to YARN, task throws 
> exception as below:
> {code:java}
> 18/04/26 16:07:26 INFO mapreduce.Job: Task Id : attempt_JOBID_m_TASKID_0, 
> Status : FAILED
> Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:489)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1468)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1538)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:306)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
> at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:258)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:424)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:793)
> at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:823)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
> ... 21 more
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> at 
> 

[jira] [Updated] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated HADOOP-15418:
--
Status: Patch Available  (was: Open)

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: HADOOP-15418.1.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411. Fixing this in 
> KMSAuthenticationFilter as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15418) Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of iterator to avoid ConcurrentModificationException

2018-10-18 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated HADOOP-15418:
--
Attachment: HADOOP-15418.1.patch

> Hadoop KMSAuthenticationFilter needs to use getPropsByPrefix instead of 
> iterator to avoid ConcurrentModificationException
> -
>
> Key: HADOOP-15418
> URL: https://issues.apache.org/jira/browse/HADOOP-15418
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: HADOOP-15418.1.patch
>
>
> The issue is similar to what was fixed in HADOOP-15411. Fixing this in 
> KMSAuthenticationFilter as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15414) Job submit not work well on HDFS Federation with Transparent Encryption feature

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655540#comment-16655540
 ] 

Hadoop QA commented on HADOOP-15414:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-15414 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15414 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921113/HADOOP-15414-trunk.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15388/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Job submit not work well on HDFS Federation with Transparent Encryption 
> feature
> ---
>
> Key: HADOOP-15414
> URL: https://issues.apache.org/jira/browse/HADOOP-15414
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15414-trunk.001.patch, 
> HADOOP-15414-trunk.002.patch
>
>
> When submit sample MapReduce job WordCount which read/write path under 
> encryption zone on HDFS Federation in security mode to YARN, task throws 
> exception as below:
> {code:java}
> 18/04/26 16:07:26 INFO mapreduce.Job: Task Id : attempt_JOBID_m_TASKID_0, 
> Status : FAILED
> Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:489)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1468)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1538)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:306)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
> at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:258)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:424)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:793)
> at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:823)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
> at 

[jira] [Commented] (HADOOP-15414) Job submit not work well on HDFS Federation with Transparent Encryption feature

2018-10-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655538#comment-16655538
 ] 

Steve Loughran commented on HADOOP-15414:
-

Does HADOOP-14445 address this?

> Job submit not work well on HDFS Federation with Transparent Encryption 
> feature
> ---
>
> Key: HADOOP-15414
> URL: https://issues.apache.org/jira/browse/HADOOP-15414
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: He Xiaoqiao
>Priority: Major
> Attachments: HADOOP-15414-trunk.001.patch, 
> HADOOP-15414-trunk.002.patch
>
>
> When submit sample MapReduce job WordCount which read/write path under 
> encryption zone on HDFS Federation in security mode to YARN, task throws 
> exception as below:
> {code:java}
> 18/04/26 16:07:26 INFO mapreduce.Job: Task Id : attempt_JOBID_m_TASKID_0, 
> Status : FAILED
> Error: java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:489)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1468)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1538)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:306)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
> at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
> at 
> org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:258)
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:424)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:793)
> at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:823)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:332)
> at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:205)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
> at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:483)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1690)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:478)
> ... 21 more
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> at 
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
> at 
> 

[jira] [Commented] (HADOOP-15821) Move Hadoop YARN Registry to Hadoop Registry

2018-10-18 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655532#comment-16655532
 ] 

Billie Rinaldi commented on HADOOP-15821:
-

LGTM as well. I tested patch 9 and verified that the packaging is working for 
me now.

> Move Hadoop YARN Registry to Hadoop Registry
> 
>
> Key: HADOOP-15821
> URL: https://issues.apache.org/jira/browse/HADOOP-15821
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HADOOP-15821.000.patch, HADOOP-15821.001.patch, 
> HADOOP-15821.002.patch, HADOOP-15821.003.patch, HADOOP-15821.004.patch, 
> HADOOP-15821.005.patch, HADOOP-15821.006.patch, HADOOP-15821.007.patch, 
> HADOOP-15821.008.patch, HADOOP-15821.009.patch
>
>
> Currently, Hadoop YARN Registry is in YARN. However, this can be used by 
> other parts of the project (e.g., HDFS). In addition, it does not have any 
> real dependency to YARN.
> We should move it into commons and make it Hadoop Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655513#comment-16655513
 ] 

Sean Mackrory edited comment on HADOOP-15860 at 10/18/18 4:25 PM:
--

Attaching a patch with a test that reproduces the issue. In this case, the 
behavior for files and directories is actually identical, so I suspect the 
nuances I was seeing were just a case of the FsShell not using the same APIs 
for similar operations on each. Note that in this test, the assertion that the 
file / directory exists succeeds, but both are missing from the listing 
because the paths are missing their periods. So this may just be a bug in 
listing - but again, the period appears to be missing from the data as soon as 
I can read it in my debugger, so I think this requires a fix on the service 
side.
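
For readers following along, a hedged sketch of what such a reproduction looks 
like against the FileSystem API. The account URI is a placeholder, and this is 
not the attached trailing-periods.patch:

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TrailingPeriodRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder container/account; running this needs real ABFS credentials.
    FileSystem fs = FileSystem.get(
        new URI("abfs://container@account.dfs.core.windows.net/"),
        new Configuration());
    Path dir = new Path("/test.");
    fs.mkdirs(dir);
    // Per the report, the existence check passes...
    System.out.println("exists(/test.) = " + fs.exists(dir));
    // ...but the listing shows "test" with the trailing period dropped.
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath().getName());
    }
  }
}
{code}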


was (Author: mackrorysd):
Attaching a patch with a test that reproduces the issue. In this case, the 
behavior for files and directories is actually identical, so I suspect the 
nuances I was seeing was just a case of the FsShell not using the same APIs for 
similar operations on each. Note that in this test, the assertion that the file 
/ directory exists succeeds, but they are both missing from lists because the 
paths are missing periods. So this may just be a bug in listing.

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655526#comment-16655526
 ] 

Hadoop QA commented on HADOOP-15850:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 29 unchanged - 0 fixed = 30 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m  
3s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15850 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944559/HADOOP-15850.v4.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cdcdbb6b9968 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2202e00 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15387/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15387/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15387/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Assigned] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory reassigned HADOOP-15860:
--

Assignee: (was: Sean Mackrory)

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655513#comment-16655513
 ] 

Sean Mackrory commented on HADOOP-15860:


Attaching a patch with a test that reproduces the issue. In this case, the 
behavior for files and directories is actually identical, so I suspect the 
nuances I was seeing were just a case of the FsShell not using the same APIs 
for similar operations on each. Note that in this test, the assertion that the 
file / directory exists succeeds, but both are missing from the listing 
because the paths are missing their periods. So this may just be a bug in 
listing.

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Attachment: (was: azure-auth-keys.xml)

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Comment: was deleted

(was: Trying to write a test that reproduces all the nuances here. Weirdly, I 
can't get the integration tests to run. With "mvn -T 1C -Dparallel-tests 
-DtestsThreadCount=8 verify" I used to be able to run all serial & parallel 
ABFS & WASB unit and integration tests with the attached (sanitized) config, 
and now it doesn't seem to work. Possibly some Maven-fu I'm not getting... 
Looks like [~ste...@apache.org] has done most of the more recent committing - 
know what I'm missing?

edit: Oh! It's the =both. Nevermind...)

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Attachment: (was: azure-bfs-auth-keys.xml)

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: azure-auth-keys.xml, trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.'), the period 
> is silently dropped and the directory is listed as simply '/test'. 
> '/test.test' appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Attachment: trailing-periods.patch

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: azure-auth-keys.xml, trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.'), the period 
> is silently dropped and the directory is listed as simply '/test'. 
> '/test.test' appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-15850:

Attachment: HADOOP-15850.v4.patch

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> HADOOP-15850.v4.patch, testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> hbase against hadoop 3.1.1.
> hbase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise the following from toString() 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the hbase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655252#comment-16655252
 ] 

Sean Mackrory edited comment on HADOOP-15860 at 10/18/18 2:44 PM:
--

Trying to write a test that reproduces all the nuances here. Weirdly, I can't 
get the integration tests to run. With "mvn -T 1C -Dparallel-tests 
-DtestsThreadCount=8 verify" I used to be able to run all serial & parallel 
ABFS & WASB unit and integration tests with the attached (sanitized) config, 
and now it doesn't seem to work. Possibly some Maven-fu I'm not getting... 
Looks like [~ste...@apache.org] has done most of the more recent committing - 
know what I'm missing?

edit: Oh! It's the =both. Nevermind...


was (Author: mackrorysd):
Trying to write a test that reproduces all the nuances here. Weirdly, I can't 
get the integration tests to run. With "mvn -T 1C -Dparallel-tests 
-DtestsThreadCount=8 verify" I used to be able to run all serial & parallel 
ABFS & WASB unit and integration tests with the attached (sanitized) config, 
and now it doesn't seem to work. Possibly some Maven-fu I'm not getting... 
Looks like [~ste...@apache.org] has done most of the more recent committing - 
know what I'm missing?

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: azure-auth-keys.xml, azure-bfs-auth-keys.xml
>
>
> If you create a directory with a trailing period (e.g. '/test.'), the period 
> is silently dropped and the directory is listed as simply '/test'. 
> '/test.test' appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Attachment: azure-bfs-auth-keys.xml
azure-auth-keys.xml

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
> Attachments: azure-auth-keys.xml, azure-bfs-auth-keys.xml
>
>
> If you create a directory with a trailing period (e.g. '/test.'), the period 
> is silently dropped and the directory is listed as simply '/test'. 
> '/test.test' appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-18 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655252#comment-16655252
 ] 

Sean Mackrory commented on HADOOP-15860:


Trying to write a test that reproduces all the nuances here. Weirdly, I can't 
get the integration tests to run. With "mvn -T 1C -Dparallel-tests 
-DtestsThreadCount=8 verify" I used to be able to run all serial & parallel 
ABFS & WASB unit and integration tests with the attached (sanitized) config, 
and now it doesn't seem to work. Possibly some Maven-fu I'm not getting... 
Looks like [~ste...@apache.org] has done most of the more recent committing - 
know what I'm missing?
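
A minimal sketch of the kind of reproduction being attempted, assuming an 
initialized ABFS FileSystem instance (the method and test-class names are 
illustrative, e.g. inside an AbstractAbfsIntegrationTest subclass):

{code}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Create a directory whose name ends in a period, then list the parent.
public void reproduceTrailingPeriod(FileSystem fs) throws Exception {
  fs.mkdirs(new Path("/test."));
  for (FileStatus st : fs.listStatus(new Path("/"))) {
    // Expected an entry named "test."; observed: the trailing period is
    // silently dropped and the entry is listed as "test".
    System.out.println(st.getPath().getName());
  }
}
{code}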

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> If you create a directory with a trailing period (e.g. '/test.'), the period 
> is silently dropped and the directory is listed as simply '/test'. 
> '/test.test' appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-18 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655067#comment-16655067
 ] 

Adam Antal commented on HADOOP-11100:
-

Thanks for the help and review [~knanasi] and [~xiaochen]!

> Support to configure ftpClient.setControlKeepAliveTimeout 
> --
>
> Key: HADOOP-11100
> URL: https://issues.apache.org/jira/browse/HADOOP-11100
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Krishnamoorthy Dharmalingam
>Assignee: Adam Antal
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-11100.002.patch, HADOOP-11100.003.patch, 
> HADOOP-11100.004.patch, HDFS-11000.001.patch
>
>
> In FTPFileSystem, the FTP control keep-alive timeout cannot currently be 
> configured. It would be straightforward to support in the 
> FTPFileSystem.connect() method via ftpClient.setControlKeepAliveTimeout, 
> along these lines:
> private FTPClient connect() throws IOException {
> ...
> String timeout = conf.get("fs.ftp.timeout." + host);
> ...
> ftpClient.setControlKeepAliveTimeout(Long.parseLong(timeout));
> ...
> }
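
A minimal sketch of what the proposed change could look like, assuming a 
per-host configuration key as suggested above (the key name and default are 
illustrative, not the committed behaviour):

{code}
import java.io.IOException;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.hadoop.conf.Configuration;

// Sketch of FTPFileSystem.connect() with a configurable keep-alive timeout.
private FTPClient connect() throws IOException {
  FTPClient client = new FTPClient();
  Configuration conf = getConf();
  String host = conf.get("fs.ftp.host");
  // Keep-alive timeout in seconds; 0 (the commons-net default) disables
  // keep-alive messages on the control channel.
  long keepAlive = conf.getLong("fs.ftp.timeout." + host, 0L);
  client.setControlKeepAliveTimeout(keepAlive);
  // ... existing connect/login logic continues here ...
  return client;
}
{code}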



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60

2018-10-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654985#comment-16654985
 ] 

Steve Loughran commented on HADOOP-15832:
-

I saw this happen yesterday with my HADOOP-14556 code too; somehow something 
changed since the previous week; let's assume YARN-8448, as it's in my 
now-rebased branch. I'd assumed it was in my patch, not anything external.

Adding the bcpkix artifact at test scope fixed this in hadoop-aws:
{code}
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcpkix-jdk15on</artifactId>
  <scope>test</scope>
</dependency>
{code}

And in test setup I added:
{code}
new OperatorCreationException("");
{code}

This adds a compile-time check on the classpath, and so makes the issue 
surface early.
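
A sketch of that setup hook, assuming JUnit 4 (the class name below is 
illustrative):

{code}
import org.bouncycastle.operator.OperatorCreationException;
import org.junit.Before;

public class TestWithKeystoreSetup {
  @Before
  public void setup() {
    // Referencing a bcpkix class directly turns a missing test-scope
    // artifact into a compile-time error, rather than a
    // NoClassDefFoundError halfway through the test run.
    new OperatorCreationException("");
  }
}
{code}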


> Upgrade BouncyCastle to 1.60
> 
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch, HADOOP-15832.addendum.patch
>
>
> As part of my work on YARN-6586, I noticed that we're using a very old 
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see 
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
>  
>  In particular, the newest release, 1.46, is from 2011! 
>  [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
>  [https://www.bouncycastle.org/latest_releases.html]
>  They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 
> release. It's currently a test-only artifact, so there should be no 
> backwards-compatibility issues with updating this. It's also needed for 
> YARN-6586, where we'll actually be shipping it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0

2018-10-18 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654941#comment-16654941
 ] 

Steve Loughran commented on HADOOP-15850:
-

Seems reasonable to me, but others who know more about DistCp need to look at 
it too.

One nit: CopyCommitter now uses SLF4J, so the log statement should be

{code}
LOG.debug("blocks per chunk {}", blocksPerChunk);
{code}
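
For context, a minimal sketch of the kind of guard being proposed, using the 
SLF4J-style logging above (the helper name and config lookup are assumptions, 
not the actual patch):

{code}
// Hypothetical guard around chunk concatenation in CopyCommitter: when
// blocks-per-chunk was never set, files were never split, so there are no
// chunks to concatenate and the consistency check that raised the
// IOException above should not run at all.
private void concatFileChunksIfNeeded(Configuration conf) throws IOException {
  int blocksPerChunk = conf.getInt(
      DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
  LOG.debug("blocks per chunk {}", blocksPerChunk);
  if (blocksPerChunk <= 0) {
    return;
  }
  concatFileChunks(conf);
}
{code}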

> CopyCommitter#concatFileChunks should check that the blocks per chunk is not 0
> --
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v2.patch, HADOOP-15850.v3.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> hbase against hadoop 3.1.1.
> hbase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> isSplit() returns false for both. Otherwise the following from toString() 
> would have been logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the hbase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #430: Merge Master changes

2018-10-18 Thread ZanderXu
Github user ZanderXu closed the pull request at:

https://github.com/apache/hadoop/pull/430


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #430: Merge Master changes

2018-10-18 Thread ZanderXu
GitHub user ZanderXu opened a pull request:

https://github.com/apache/hadoop/pull/430

Merge Master changes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZanderXu/hadoop master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/430.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #430


commit 44dbddb02287d4b1cf6f531c58f215e64e4f447c
Author: Akira Ajisaka 
Date:   2015-06-25T15:20:12Z

HDFS-8462. Implement GETXATTRS and LISTXATTRS operations for 
WebImageViewer. Contributed by Jagadesh Kiran N.

(cherry picked from commit bc433908d35758ff0a7225cd6f5662959ef4d294)

commit 05e9ffdd6226aa7155f4c77709f8043067aed382
Author: Arpit Agarwal 
Date:   2015-06-25T17:13:22Z

HDFS-8640. Make reserved RBW space visible through JMX. (Contributed by 
kanaka kumar avvaru)

commit 31fd7a51d8850d140a9c5b9b2adbf06fcdc2a7f0
Author: Jason Lowe 
Date:   2015-06-25T19:50:07Z

MAPREDUCE-6413. TestLocalJobSubmission is failing with unknown host. 
Contributed by zhihai xu
(cherry picked from commit aa5b15b03be61ebb76a226e0de485d5228c8e3d0)

commit ba406b723d4b978d2dd02f35a2a8e20a812a3cee
Author: Andrew Wang 
Date:   2015-06-26T00:29:24Z

HDFS-8665. Fix replication check in DFSTestUtils#waitForReplication.

(cherry picked from commit ff0e5e572f5dcf7b49381cbe901360f6e171d423)

commit 84fdd4a3a0c3e0e696f8818134e188e306948686
Author: Andrew Wang 
Date:   2015-06-26T00:50:32Z

HDFS-8546. Use try with resources in DataStorage and Storage.

(cherry picked from commit 1403b84b122fb76ef2b085a728b5402c32499c1f)

commit 55427fb66c6d52ce98b4d68a29b592a734014c28
Author: Harsh J 
Date:   2012-09-23T10:37:52Z

HADOOP-8151. Error handling in snappy decompressor throws invalid 
exceptions. Contributed by Matt Foley. (harsh)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1389006 
13f79535-47bb-0310-9956-ffa450edef68
(cherry picked from commit ac31d6a4485d7ff9074fb5dade7a6cf5292bb347)

Conflicts:

hadoop-common-project/hadoop-common/CHANGES.txt

commit 0221d19f4e398c386f4ca3990b0893562aa8dacf
Author: Jason Lowe 
Date:   2015-06-26T15:47:07Z

YARN-3850. NM fails to read files from full disks which can lead to 
container logs being lost and other issues. Contributed by Varun Saxena
(cherry picked from commit 40b256949ad6f6e0dbdd248f2d257b05899f4332)

commit 8552af91f4a44e16129ac41d4acbae0d444e570f
Author: Colin Patrick Mccabe 
Date:   2015-06-26T17:21:40Z

HDFS-8651. Make hadoop-hdfs-project Native code -Wall-clean (Alan Burlison 
via Colin P. McCabe)

(cherry picked from commit 1b764a01fd8010cf9660eb378977a1b2b81f330a)

commit 83d76151e279ebd3f11f7b342350816e7dca6d76
Author: Jing Zhao 
Date:   2015-06-26T17:49:01Z

HDFS-8623. Refactor NameNode handling of invalid, corrupt, and 
under-recovery blocks. Contributed by Zhe Zhang.

(cherry picked from commit de480d6c8945bd8b5b00e8657b7a72ce8dd9b6b5)

commit dd7776b2fe158abbe0626743612adca4ad08f581
Author: Andrew Wang 
Date:   2015-06-26T18:30:59Z

HDFS-8656. Preserve compatibility of ClientProtocol#rollingUpgrade after 
finalization.

(cherry picked from commit 60b858bfa65e0feb665e1a84784a3d45e9091c66)

commit 9cf5bd2fad94d1067dc01c47ae7e8eab50cf9d39
Author: Colin Patrick Mccabe 
Date:   2015-06-26T19:32:31Z

HADOOP-12036. Consolidate all of the cmake extensions in one directory 
(alanburlison via cmccabe)

(cherry picked from commit aa07dea3577158b92a17651d10da20df73f54561)

commit 1a8d162bc4c6da9291a39fcf6981e046e2d188d4
Author: Xuan 
Date:   2015-06-27T02:43:59Z

YARN-2871. TestRMRestart#testRMRestartGetApplicationList sometime fails
in trunk. Contributed by zhihai xu

(cherry picked from commit fe6c1bd73aee188ed58df4d33bbc2d2fe0779a97)

commit e163c1e0dabf2b8012cb6304351836f6dddb85a2
Author: Devaraj K 
Date:   2015-06-28T04:34:50Z

YARN-3859. LeafQueue doesn't print user properly for application add.
Contributed by Varun Saxena.

(cherry picked from commit b543d1a390a67e5e92fea67d3a2635058c29e9da)

commit f02c06965c7caecb64c56c65b03648e998643c67
Author: Steve Loughran 
Date:   2015-06-28T18:13:48Z

HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix  
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel)

commit 0fd47fa8099d4c2efad19a253eabe3bbd2cb78de
Author: Arpit Agarwal 
Date:   2015-06-28T21:51:17Z

HDFS-8681. BlockScanner is incorrectly disabled by default. (Contributed by 
Arpit Agarwal)

commit 4155bb565efc6cfe4f0cb0d117d7875cd049ee0c
Author: Vinod Kumar Vavilapalli 
Date:   2015-06-28T23:29:12Z

Adding release 2.7.2 to CHANGES.txt.

(cherry picked from commit aad6a7d5dba5858d6e9845f18c4baec16c91911d)

commit 

[GitHub] hadoop pull request #429: Update changes

2018-10-18 Thread ZanderXu
Github user ZanderXu closed the pull request at:

https://github.com/apache/hadoop/pull/429


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop pull request #429: Update changes

2018-10-18 Thread ZanderXu
GitHub user ZanderXu opened a pull request:

https://github.com/apache/hadoop/pull/429

Update changes



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZanderXu/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/429.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #429


commit ef8cd5dc565f901b4954befe784675e130e84c3c
Author: Andrew Wang 
Date:   2017-09-15T23:20:36Z

HDFS-10701. TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired 
occasionally fails. Contributed by SammiChen.

commit 958e8c0e257216c82f68fee726e5280a919da94a
Author: Wangda Tan 
Date:   2017-09-16T04:24:11Z

YARN-6977. Node information is not provided for non am containers in RM 
logs. (Suma Shivaprasad via wangda)

Change-Id: I0c44d09a560446dee2ba68c2b9ae69fce0ec1d3e
(cherry picked from commit 8a42e922fad613f3cf1cc6cb0f3fa72546a9cc56)

commit 38c14ef8d8a094a7101917eb77d90f5e62324f61
Author: Wangda Tan 
Date:   2017-09-16T04:25:21Z

YARN-7149. Cross-queue preemption sometimes starves an underserved queue. 
(Eric Payne via wangda)

Change-Id: Ib269991dbebce160378e8372ee6d24849c4a5ed6
(cherry picked from commit 3dfa937a1fadfc62947755872515f549b3b15e6a)

commit 7618fa9194b40454405f11a25bec4e2d79506912
Author: Daniel Templeton 
Date:   2017-09-16T07:20:33Z

HADOOP-13714. Tighten up our compatibility guidelines for Hadoop 3

commit 8d7cc22ac286302960c7939bc53574cbfeab1846
Author: Arpit Agarwal 
Date:   2017-09-16T17:09:27Z

HDFS-12472. Add JUNIT timeout to TestBlockStatsMXBean. Contributed by 
Bharat Viswanadham.

commit e81596d06d226f1cfa44b2390ce3095ed4dee621
Author: Wangda Tan 
Date:   2017-09-18T04:20:43Z

YARN-7172. ResourceCalculator.fitsIn() should not take a cluster resource 
parameter. (Sen Zhao via wangda)

Change-Id: Icc3670c9381ce7591ca69ec12da5aa52d3612d34

commit 0f9af246e89e4ad3c4d7ff2c1d7ec9b397494a03
Author: Kai Zheng 
Date:   2017-09-18T10:07:12Z

HDFS-12460. Make addErasureCodingPolicy an idempotent operation. 
Contributed by Sammi Chen

commit a4f9c7c9247801dd37beec6fc195622af1b884ad
Author: Jason Lowe 
Date:   2017-09-18T15:16:09Z

YARN-7192. Add a pluggable StateMachine Listener that is notified of NM 
Container State changes. Contributed by Arun Suresh

commit a2dcba18531c6fa4b76325f5132773f12ddfc6d5
Author: Arpit Agarwal 
Date:   2017-09-18T16:53:24Z

HDFS-12470. DiskBalancer: Some tests create plan files under system 
directory. Contributed by Hanisha Koneru.

commit 5f496683fb00ba26a6bf5a506ae87d4bc4088727
Author: Robert Kanter 
Date:   2017-09-18T17:32:08Z

Revert "YARN-7162. Remove XML excludes file format (rkanter)" - wrong 
commit message

This reverts commit 3a8d57a0a2e047b34be82f602a2b6cf5593d2125.

commit 0adc0471d0c06f66a31060f270dcb50a7b4ffafa
Author: Robert Kanter 
Date:   2017-09-18T17:40:06Z

MAPREDUCE-6954. Disable erasure coding for files that are uploaded to the 
MR staging area (pbacsko via rkanter)

commit 29dd55153e37471d9c177f4bd173f1d02bc96410
Author: Arun Suresh 
Date:   2017-09-18T18:26:44Z

YARN-7199. Fix 
TestAMRMClientContainerRequest.testOpportunisticAndGuaranteedRequests. (Botong 
Huang via asuresh)

commit 7c732924a889cd280e972882619a1827877fbafa
Author: Xuan 
Date:   2017-09-18T21:04:05Z

YARN-6570. No logs were found for running application, running
container. Contributed by Junping Du

commit 1ee25278c891e95ba2ab142e5b78aebd752ea163
Author: Haibo Chen 
Date:   2017-09-18T21:25:35Z

HADOOP-14771. hadoop-client does not include hadoop-yarn-client. (Ajay 
Kumar via Haibo Chen)

commit b3d61304f2fa4a99526f7a60ccaac9f262083079
Author: Jason Lowe 
Date:   2017-09-18T22:04:43Z

MAPREDUCE-6958. Shuffle audit logger should log size of shuffle transfer. 
Contributed by Jason Lowe

commit 3cf3540f19b5fd1a174690db9f1b7be2977d96ba
Author: Andrew Wang 
Date:   2017-09-18T22:13:42Z

HADOOP-14835. mvn site build throws SAX errors. Contributed by Andrew Wang 
and Sean Mackrory.

commit 56ef5279c1db93d03b2f1e04badbfe804f548918
Author: Arun Suresh 
Date:   2017-09-18T22:49:31Z

YARN-7203. Add container ExecutionType into ContainerReport. (Botong Huang 
via asuresh)

commit 2018538fdba1a95a6556187569e872fce7f9e1c3
Author: Akira Ajisaka 
Date:   2017-09-19T02:05:54Z

MAPREDUCE-6947. Moving logging APIs over to slf4j in 
hadoop-mapreduce-examples. Contributed by Gergery Novák.

commit 31b58406ac369716ef1665b7d60a3409117bdf9d
Author: Brahma Reddy Battula 
Date:   2017-09-19T05:07:07Z

HDFS-12480. TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails 
in trunk. Contributed by Hanisha Koneru

commit fda1221c55101d97ac62e1ee4e3ddf9a915d5363
Author: Brahma Reddy Battula 
Date:   2017-09-19T05:55:45Z

HDFS-11799. Introduce a config to