[jira] [Commented] (HADOOP-15861) Move DelegationTokenIssuer to the right path

2018-10-16 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652973#comment-16652973
 ] 

Xiao Chen commented on HADOOP-15861:


Good catch. +1 pending

> Move DelegationTokenIssuer to the right path
> 
>
> Key: HADOOP-15861
> URL: https://issues.apache.org/jira/browse/HADOOP-15861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HADOOP-15861.001.patch
>
>
> The addendum patch of HADOOP-14445 updated the package name of 
> DelegationTokenIssuer, but it didn't move the file to the matching path.
> It is currently under 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/org/apache/hadoop/security/token/DelegationTokenIssuer.java
> We should move it under
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DelegationTokenIssuer.java
> Filing this as a separate jira instead of muddling through HADOOP-14445.






[jira] [Commented] (HADOOP-14445) Use DelegationTokenIssuer to create KMS delegation tokens that can authenticate to all KMS instances

2018-10-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652947#comment-16652947
 ] 

Wei-Chiu Chuang commented on HADOOP-14445:
--

Thanks [~daryn], [~xiaochen] for carrying this task to the finish line. I found 
an issue in the commits that I believe is unintentional. Please head over to 
HADOOP-15861 to check it out.

> Use DelegationTokenIssuer to create KMS delegation tokens that can 
> authenticate to all KMS instances
> 
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-14445-branch-2.8.002.patch, 
> HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, 
> HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, 
> HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, 
> HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, 
> HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, 
> HADOOP-14445.15.patch, HADOOP-14445.16.patch, HADOOP-14445.17.patch, 
> HADOOP-14445.18.patch, HADOOP-14445.19.patch, HADOOP-14445.20.patch, 
> HADOOP-14445.addemdum.patch, HADOOP-14445.branch-2.000.precommit.patch, 
> HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, 
> HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, 
> HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, 
> HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, 
> HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, 
> HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, 
> HADOOP-14445.branch-3.0.001.patch, HADOOP-14445.compat.patch, 
> HADOOP-14445.revert.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does 
> not share delegation tokens (a client uses the KMS address/port as the key for 
> the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
> InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
> url.getPort());
> Text service = SecurityUtil.buildTokenService(serviceAddr);
> dToken = creds.getToken(service);
> {code}
> But the KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, A KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
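
For illustration, a minimal sketch of the lookup mismatch (hostnames and port are placeholders; SecurityUtil and Text are the same classes used in the snippet above):
{code}
import java.net.InetSocketAddress;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.SecurityUtil;

public class KmsTokenServiceKeyDemo {
  public static void main(String[] args) {
    // The token service key is derived from whichever KMS host:port the
    // client happens to talk to, so each HA instance yields a different key.
    Text service1 = SecurityUtil.buildTokenService(
        new InetSocketAddress("kms1.example.com", 9600));
    Text service2 = SecurityUtil.buildTokenService(
        new InetSocketAddress("kms2.example.com", 9600));
    // Prints two different keys; creds.getToken(service2) would therefore
    // miss a delegation token that was stored under service1.
    System.out.println(service1 + " vs " + service2);
  }
}
{code}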






[jira] [Updated] (HADOOP-15861) Move DelegationTokenIssuer to the right path

2018-10-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15861:
-
Status: Patch Available  (was: Open)

Submitting the patch for a precommit check. I verified that the patch compiles locally.

> Move DelegationTokenIssuer to the right path
> 
>
> Key: HADOOP-15861
> URL: https://issues.apache.org/jira/browse/HADOOP-15861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HADOOP-15861.001.patch
>
>
> The addendum patch of HADOOP-14445 updated the package name of 
> DelegationTokenIssuer, but it didn't move the file to the matching path.
> It is currently under 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/org/apache/hadoop/security/token/DelegationTokenIssuer.java
> We should move it under
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DelegationTokenIssuer.java
> Filing this as a separate jira instead of muddling through HADOOP-14445.






[jira] [Updated] (HADOOP-15861) Move DelegationTokenIssuer to the right path

2018-10-16 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15861:
-
Attachment: HADOOP-15861.001.patch

> Move DelegationTokenIssuer to the right path
> 
>
> Key: HADOOP-15861
> URL: https://issues.apache.org/jira/browse/HADOOP-15861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HADOOP-15861.001.patch
>
>
> The addendum patch of HADOOP-14445 updated the package name of 
> DelegationTokenIssuer, but it didn't move the file to the matching path.
> It is currently under 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/org/apache/hadoop/security/token/DelegationTokenIssuer.java
> We should move it under
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DelegationTokenIssuer.java
> Filing this as a separate jira instead of muddling through HADOOP-14445.






[jira] [Created] (HADOOP-15861) Move DelegationTokenIssuer to the right path

2018-10-16 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-15861:


 Summary: Move DelegationTokenIssuer to the right path
 Key: HADOOP-15861
 URL: https://issues.apache.org/jira/browse/HADOOP-15861
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.2.0, 3.0.4, 3.1.2
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


The addendum patch of HADOOP-14445 updated the package name of 
DelegationTokenIssuer, but it didn't move the file to the matching path.
It is currently under 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/org/apache/hadoop/security/token/DelegationTokenIssuer.java

We should move it under
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DelegationTokenIssuer.java

Filing this as a separate jira instead of muddling through HADOOP-14445.
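
The fix is essentially a one-file move, e.g. (a sketch; per the description, the package declaration already matches the target path):
{code}
git mv \
  hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/org/apache/hadoop/security/token/DelegationTokenIssuer.java \
  hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DelegationTokenIssuer.java
{code}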






[jira] [Commented] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-16 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652927#comment-16652927
 ] 

Sean Mackrory commented on HADOOP-15860:


It should be trivial to write a test that reproduces this, but for some reason 
the integration tests aren't running for me anymore; I'll need to debug my config...

However, after tracing this through, I strongly suspect the period is going 
missing server-side. CC'ing [~tmarquardt], [~DanielZhou]. The period stays with 
the write request as far as I can trace it, yet seems to be missing immediately 
from any list request. It can easily be reproduced as follows:

{code}
hadoop fs -mkdir ${ABFS_ROOT}/test.
hadoop fs -ls ${ABFS_ROOT}/ # test. will be missing, but test will be present
{code}
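
A minimal sketch of the kind of test meant above (JUnit 4; assumes fs.defaultFS in the loaded configuration points at the ABFS test container, with the usual hadoop-azure test wiring omitted):
{code}
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class ITestAbfsTrailingPeriod {
  @Test
  public void testMkdirsKeepsTrailingPeriod() throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path withPeriod = new Path("/test.");
    assertTrue(fs.mkdirs(withPeriod));
    // With the bug, the entry surfaces as /test instead, so the first
    // assertion fails and the second one shows where the name went.
    assertTrue("trailing period was dropped", fs.exists(withPeriod));
    assertFalse("directory surfaced as /test", fs.exists(new Path("/test")));
  }
}
{code}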

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.






[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652876#comment-16652876
 ] 

Ted Yu commented on HADOOP-15850:
-

I tried adding the '-blocksperchunk 0' option when invoking DistCp:
{code}
2018-10-17 02:33:53,708 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob(416): New DistCp options: [-async, 
-blocksperchunk, 0, 
hdfs://localhost:34344/user/hbase/test-data/78931012-3303-fc71-e289-5a9726f1bfcc/data/default/test-1539743586635/2e17accd93f78be97c0f585e68f283d6/f/46480cbed054406c9ef52ff123729938_SeqId_205_,
 
hdfs://localhost:34344/user/hbase/test-data/78931012-3303-fc71-e289-5a9726f1bfcc/data/default/test-1539743586635/2e17accd93f78be97c0f585e68f283d6/f/7e3cc96eb3f7447cb4f925df947d1fa3_SeqId_205_,
 hdfs://localhost:34344/backupUT/backup_1539743624592]
{code}
I still encountered the 'Inconsistent sequence file' error.

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.






[jira] [Commented] (HADOOP-15802) start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry

2018-10-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652813#comment-16652813
 ] 

Hudson commented on HADOOP-15802:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15236/])
HADOOP-15802. start-build-env.sh creates an invalid (aajisaka: rev 
e3342a1abaff71823ebd952baf24a6143e711b99)
* (edit) start-build-env.sh


> start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} 
> file entry
> ---
>
> Key: HADOOP-15802
> URL: https://issues.apache.org/jira/browse/HADOOP-15802
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
> Environment: Ubuntu 18.04 x86_64 running in a VM with 4 CPUs / 8 GBs 
> RAM / 128 GB disk.  
>Reporter: Jon Boone
>Assignee: Jon Boone
>Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15802.001.patch
>
>
> In my Ubuntu 18.04 dev VM, I cloned the hadoop repo and ran the 
> start-build-env.sh script.  Once the docker build was completed and the 
> container running, I tried to sudo and it failed.  Upon investigation, I 
> discovered that it was creating an entry in 
> /etc/sudoers.d/hadoop-build-${USER_ID} that contained the characters '\t' 
> rather than a tab.
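
For illustration (these are not the exact script lines), the failure mode comes down to whether the \t escape is processed before the entry is written:
{code}
# In bash, echo without -e leaves the two characters '\' and 't' in place,
# producing a sudoers line that sudo rejects ("jenkins" is a placeholder):
echo "jenkins\tALL=NOPASSWD: ALL"       # writes: jenkins\tALL=NOPASSWD: ALL
# printf interprets the escape and emits a real tab:
printf 'jenkins\tALL=NOPASSWD: ALL\n'   # writes: jenkins<TAB>ALL=NOPASSWD: ALL
{code}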






[jira] [Updated] (HADOOP-15802) start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry

2018-10-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15802:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.2
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.2, and branch-3.1. Thanks [~jonBoone] for the 
contribution!

> start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} 
> file entry
> ---
>
> Key: HADOOP-15802
> URL: https://issues.apache.org/jira/browse/HADOOP-15802
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
> Environment: Ubuntu 18.04 x86_64 running in a VM with 4 CPUs / 8 GBs 
> RAM / 128 GB disk.  
>Reporter: Jon Boone
>Assignee: Jon Boone
>Priority: Minor
> Fix For: 3.2.0, 3.1.2
>
> Attachments: HADOOP-15802.001.patch
>
>
> In my Ubuntu 18.04 dev VM, I cloned the hadoop repo and ran the 
> start-build-env.sh script.  Once the docker build was completed and the 
> container running, I tried to sudo and it failed.  Upon investigation, I 
> discovered that it was creating an entry in 
> /etc/sudoers.d/hadoop-build-${USER_ID} that contained the characters '\t' 
> rather than a tab.






[jira] [Assigned] (HADOOP-15802) start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry

2018-10-16 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-15802:
--

Assignee: Jon Boone

> start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} 
> file entry
> ---
>
> Key: HADOOP-15802
> URL: https://issues.apache.org/jira/browse/HADOOP-15802
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
> Environment: Ubuntu 18.04 x86_64 running in a VM with 4 CPUs / 8 GBs 
> RAM / 128 GB disk.  
>Reporter: Jon Boone
>Assignee: Jon Boone
>Priority: Minor
> Attachments: HADOOP-15802.001.patch
>
>
> In my Ubuntu 18.04 dev VM, I cloned the hadoop repo and ran the 
> start-build-env.sh script.  Once the docker build was completed and the 
> container running, I tried to sudo and it failed.  Upon investigation, I 
> discovered that it was creating an entry in 
> /etc/sudoers.d/hadoop-build-${USER_ID} that contained the characters '\t' 
> rather than a tab.






[jira] [Commented] (HADOOP-15802) start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} file entry

2018-10-16 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652778#comment-16652778
 ] 

Akira Ajisaka commented on HADOOP-15802:


+1, nice catch!

> start-build-env.sh creates an invalid /etc/sudoers.d/hadoop-build-${USER_ID} 
> file entry
> ---
>
> Key: HADOOP-15802
> URL: https://issues.apache.org/jira/browse/HADOOP-15802
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
> Environment: Ubuntu 18.04 x86_64 running in a VM with 4 CPUs / 8 GBs 
> RAM / 128 GB disk.  
>Reporter: Jon Boone
>Priority: Minor
> Attachments: HADOOP-15802.001.patch
>
>
> In my Ubuntu 18.04 dev VM, I cloned the hadoop repo and ran the 
> start-build-env.sh script.  Once the docker build was completed and the 
> container running, I tried to sudo and it failed.  Upon investigation, I 
> discovered that it was creating an entry in 
> /etc/sudoers.d/hadoop-build-${USER_ID} that contained the characters '\t' 
> rather than a tab.






[jira] [Commented] (HADOOP-14603) S3A input stream to support ByteBufferReadable

2018-10-16 Thread Keith Godwin Chapman (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652776#comment-16652776
 ] 

Keith Godwin Chapman commented on HADOOP-14603:
---

Sorry [~ste...@apache.org], I missed your comment too :). Doesn't this need 
support from the AWS SDK, though?

> S3A input stream to support ByteBufferReadable
> --
>
> Key: HADOOP-14603
> URL: https://issues.apache.org/jira/browse/HADOOP-14603
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> S3AInputStream could support {{ByteBufferReadable, 
> HasEnhancedByteBufferAccess}} and the operations to read into byte buffers.
> This is worthwhile only if we see a clear performance benefit from doing so, or 
> if the API is being more broadly used.
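
For reference, a short sketch of what this read path looks like to a caller (the path is a placeholder): FSDataInputStream#read(ByteBuffer) delegates to the wrapped stream and throws UnsupportedOperationException when that stream, as S3AInputStream does today, does not implement ByteBufferReadable:
{code}
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ByteBufferReadDemo {
  public static void main(String[] args) throws Exception {
    Path path = new Path("s3a://bucket/key");
    FileSystem fs = path.getFileSystem(new Configuration());
    try (FSDataInputStream in = fs.open(path)) {
      ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024);
      // Succeeds only when the underlying stream implements
      // ByteBufferReadable, which is what this issue proposes for S3A.
      int bytesRead = in.read(buf);
      System.out.println("read " + bytesRead + " bytes");
    }
  }
}
{code}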






[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652765#comment-16652765
 ] 

Ted Yu commented on HADOOP-15850:
-

The DistCpOptions instance for the DistCp session is not passed to 
CopyCommitter.
If the per-chunk information were available there, the {{concatFileChunks}} 
call could be made conditional on its value.
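
As a sketch of that idea (assuming the -blocksperchunk setting from HADOOP-11794 can be recovered from the job configuration via DistCpOptionSwitch, since the DistCpOptions object itself is not available in the committer):
{code}
// Hypothetical guard inside CopyCommitter#commitJob, before the concat step:
int blocksPerChunk = conf.getInt(
    DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
if (blocksPerChunk > 0) {
  // Chunking was requested, so intermediate chunk files may exist.
  concatFileChunks(conf);
}
// With blocksPerChunk == 0 the listing contains whole files only, and the
// "Inconsistent sequence file" check can never legitimately trigger.
{code}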

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.






[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652736#comment-16652736
 ] 

Ted Yu commented on HADOOP-15850:
-

[~jojochuang]:
See the link to MapReduceBackupCopyJob.java in my first comment.
We invoke DistCp programmatically.

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.






[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652726#comment-16652726
 ] 

Ted Yu commented on HADOOP-15850:
-

Running the backup test against Hadoop 3.0.x / 3.1.y, this is easily 
reproducible.

I was aware of HADOOP-11794 and was wondering why the per-chunk feature kicks in.

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.






[jira] [Comment Edited] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652723#comment-16652723
 ] 

Wei-Chiu Chuang edited comment on HADOOP-15850 at 10/17/18 12:28 AM:
-

Ted, if I understand it correctly (I have limited experience with distcp and 
hbase backup), you're hitting a bug hidden in HADOOP-11794. You don't seem to 
be using the -blocksperchunk feature, but the implementation thinks you do.

For context, HADOOP-11794 allows distcp to copy a source file as multiple 
chunked intermediate files and then "stitch" them together into the final 
file. This is a useful feature for storage systems that have high latency but 
high throughput.

Is it reproducible for HBase on Hadoop 3.1.1?


was (Author: jojochuang):
Ted, if I understand it correctly (I have limited experience with distcp and 
hbase backup)
You're hitting a bug hidden in HADOOP-11794. You don't seem to be using 
-blocksperchunk feature, but the implementation thinks you do.

For the context, HADOOP-11794 allows distcp to copy a source file to multiple 
chunked intermediate files, and then "stitch" them together to the final file. 
This is a useful feature for storage systems that have long latency but high 
throughput.

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.




[jira] [Commented] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652723#comment-16652723
 ] 

Wei-Chiu Chuang commented on HADOOP-15850:
--

Ted, if I understand it correctly (I have limited experience with distcp and 
hbase backup), you're hitting a bug hidden in HADOOP-11794. You don't seem to 
be using the -blocksperchunk feature, but the implementation thinks you do.

For context, HADOOP-11794 allows distcp to copy a source file as multiple 
chunked intermediate files and then "stitch" them together into the final 
file. This is a useful feature for storage systems that have high latency but 
high throughput.
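
For reference, chunked copying is driven by the -blocksperchunk switch: a positive value splits large source files into chunks that are copied in parallel and concatenated at commit time, while 0 (the default) disables splitting. A sketch with placeholder paths:
{code}
# Split source files into chunks of 8 blocks each, copy the chunks in
# parallel, then stitch them back together during job commit:
hadoop distcp -blocksperchunk 8 hdfs://nn1:8020/src hdfs://nn2:8020/dst
{code}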

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.






[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-15850:

Attachment: HADOOP-15850.v1.patch

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: HADOOP-15850.v1.patch, 
> testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.






[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-15850:

Description: 
I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
HBase against Hadoop 3.1.1.

HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
{code}
LOG.debug("creating input listing " + listing + " , totalRecords=" + 
totalRecords);
cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
totalRecords);
{code}
For the test case, two bulk loaded hfiles are in the listing:
{code}
2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 2 
files of 10242
{code}
Later on, CopyCommitter#concatFileChunks would throw the following exception:
{code}
2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
job_local1795473782_0004
java.io.IOException: Inconsistent sequence file: current chunk file 
org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
   
160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
 length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
   
2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
 length = 5142 aclEntries = null, xAttrs = null}
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
{code}
The above warning shouldn't happen - the two bulk loaded hfiles are independent.

From the contents of the two CopyListingFileStatus instances, we can see that 
their isSplit() returns false. Otherwise the following from toString would be 
logged:
{code}
if (isSplit()) {
  sb.append(", chunkOffset = ").append(this.getChunkOffset());
  sb.append(", chunkLength = ").append(this.getChunkLength());
}
{code}
On the HBase side, we could specify one bulk loaded hfile per job, but that 
defeats the purpose of using DistCp.



  was:
I was investigating test failure of TestIncrementalBackupWithBulkLoad from 
hbase against hadoop 3.1.1

hbase MapReduceBackupCopyJob$BackupDistCp would create listing file:
{code}
LOG.debug("creating input listing " + listing + " , totalRecords=" + 
totalRecords);
cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
totalRecords);
{code}
For the test case, two bulk loaded hfiles are in the listing:
{code}
2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 2 
files of 10242
{code}
Later on, CopyCommitter#concatFileChunks would throw the following exception:
{code}
2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
job_local1795473782_0004
java.io.IOException: Inconsistent sequence file: current chunk file 
org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
   
160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
 length = 5100 

[jira] [Updated] (HADOOP-15850) CopyCommitter#concatFileChunks should check that the source file to be merged is a split

2018-10-16 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-15850:

Summary: CopyCommitter#concatFileChunks should check that the source file 
to be merged is a split  (was: Allow CopyCommitter to skip concatenating source 
files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH)

> CopyCommitter#concatFileChunks should check that the source file to be merged 
> is a split
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating a test failure of TestIncrementalBackupWithBulkLoad from 
> HBase against Hadoop 3.1.1.
> HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() returns false. Otherwise the following from toString would be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> On the HBase side, we could specify one bulk loaded hfile per job, but that 
> defeats the purpose of using DistCp.
> There should be a way to tell DistCp to skip the source file concatenation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15850) Allow CopyCommitter to skip concatenating source files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH

2018-10-16 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652693#comment-16652693
 ] 

Ted Yu commented on HADOOP-15850:
-

I wonder if the check for mismatching FileStatus should be refined this way:
{code}
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
index 07eacb0..6177454 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
@@ -266,9 +266,9 @@ private void concatFileChunks(Configuration conf) throws IOException {
     // Two neighboring chunks have to be consecutive ones for the same
     // file, for them to be merged
     if (!srcFileStatus.getPath().equals(lastFileStatus.getPath()) ||
-        srcFileStatus.getChunkOffset() !=
+        lastFileStatus.isSplit() && (srcFileStatus.getChunkOffset() !=
             (lastFileStatus.getChunkOffset() +
-            lastFileStatus.getChunkLength())) {
+            lastFileStatus.getChunkLength()))) {
       String emsg = "Inconsistent sequence file: current " +
           "chunk file " + srcFileStatus + " doesnt match prior " +
           "entry " + lastFileStatus;
{code}
The additional clause checks that lastFileStatus represents a split.
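Restated as a standalone predicate, a minimal sketch (the helper name is hypothetical; {{CopyListingFileStatus}} and its accessors are the real API):
{code}
import org.apache.hadoop.tools.CopyListingFileStatus;

public final class ChunkChecks {
  // Hypothetical helper restating the patched condition: chunks are
  // inconsistent if they belong to different files, or if the prior entry
  // is a split whose end does not line up with this chunk's start.
  static boolean isInconsistent(CopyListingFileStatus src,
                                CopyListingFileStatus last) {
    if (!src.getPath().equals(last.getPath())) {
      return true;   // different files cannot be merged
    }
    // Whole (non-split) entries carry no chunk sequence to validate.
    return last.isSplit()
        && src.getChunkOffset()
            != last.getChunkOffset() + last.getChunkLength();
  }
}
{code}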

[~ste...@apache.org] [~yzhangal] [~jojochuang] 
What do you think?

> Allow CopyCommitter to skip concatenating source files specified by 
> DistCpConstants.CONF_LABEL_LISTING_FILE_PATH
> 
>
> Key: HADOOP-15850
> URL: https://issues.apache.org/jira/browse/HADOOP-15850
> Project: Hadoop Common
>  Issue Type: Task
>  Components: tools/distcp
>Affects Versions: 3.1.1
>Reporter: Ted Yu
>Priority: Major
> Attachments: testIncrementalBackupWithBulkLoad-output.txt
>
>
> I was investigating test failure of TestIncrementalBackupWithBulkLoad from 
> hbase against hadoop 3.1.1
> hbase MapReduceBackupCopyJob$BackupDistCp would create listing file:
> {code}
> LOG.debug("creating input listing " + listing + " , totalRecords=" + 
> totalRecords);
> cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
> cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
> totalRecords);
> {code}
> For the test case, two bulk loaded hfiles are in the listing:
> {code}
> 2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
> hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
> 2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 
> 2 files of 10242
> {code}
> Later on, CopyCommitter#concatFileChunks would throw the following exception:
> {code}
> 2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
> job_local1795473782_0004
> java.io.IOException: Inconsistent sequence file: current chunk file 
> org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
>
> 160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
>  length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
> org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
>
> 2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
>  length = 5142 aclEntries = null, xAttrs = null}
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
>   at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
> {code}
> The above warning shouldn't happen - the two bulk loaded hfiles are 
> independent.
> From the contents of the two CopyListingFileStatus instances, we can see that 
> their isSplit() return false. Otherwise the following from toString should be 
> logged:
> {code}
> if (isSplit()) {
>   sb.append(", chunkOffset = ").append(this.getChunkOffset());
>   sb.append(", chunkLength = ").append(this.getChunkLength());
> }
> {code}
> From hbase side, we can specify one bulk loaded hfile per job but that 
> defeats the purpose of using DistCp.
> There should be a way for DistCp to specify the skipping of source file 
> concatenation.

[jira] [Commented] (HADOOP-15823) ABFS: Stop requiring client ID and tenant ID for MSI

2018-10-16 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652688#comment-16652688
 ] 

Da Zhou commented on HADOOP-15823:
--

[~mackrorysd] I got the time to check the logs; the failed request you shared 
failed because it needs the *blob permission*. I'm wondering if there is a 
configuration issue, but since it worked for you before setting the tenant 
ID/client ID to empty, it is really weird.
I am trying to create and test MSI myself; it might take some time since I am 
not familiar with it, but hopefully I can give an update soon.

> ABFS: Stop requiring client ID and tenant ID for MSI
> 
>
> Key: HADOOP-15823
> URL: https://issues.apache.org/jira/browse/HADOOP-15823
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15823-001.patch, HADOOP-15823-002.patch
>
>
> ABFS requires the user to configure the tenant ID and client ID. From my 
> understanding of MSI, that shouldn't be necessary and is an added requirement 
> compared to MSI in ADLS. Can that be dropped?
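For reference, a sketch of the configuration under discussion, assuming the OAuth property names from the hadoop-azure documentation (placeholder values; not a verified setup):
{code}
import org.apache.hadoop.conf.Configuration;

public class MsiConfSketch {
  public static Configuration msiConf() {
    Configuration conf = new Configuration();
    conf.set("fs.azure.account.auth.type", "OAuth");
    conf.set("fs.azure.account.oauth.provider.type",
        "org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider");
    // The two values this issue proposes to make optional for MSI:
    conf.set("fs.azure.account.oauth2.msi.tenant", "<tenant-id>");
    conf.set("fs.azure.account.oauth2.client.id", "<client-id>");
    return conf;
  }
}
{code}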



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) Trailing period in file names gets ignored for some operations.

2018-10-16 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Affects Version/s: 3.2.0

> Trailing period in file names gets ignored for some operations.
> ---
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) Trailing period in file names gets ignored for some operations.

2018-10-16 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Component/s: fs/adl

> Trailing period in file names gets ignored for some operations.
> ---
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) ABFS: Trailing period in file names gets ignored for some operations.

2018-10-16 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Summary: ABFS: Trailing period in file names gets ignored for some 
operations.  (was: Trailing period in file names gets ignored for some 
operations.)

> ABFS: Trailing period in file names gets ignored for some operations.
> -
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15860) Trailing period in file names gets ignored for some operations.

2018-10-16 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15860:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15763

> Trailing period in file names gets ignored for some operations.
> ---
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15860) Trailing period in file names gets ignored for some operations.

2018-10-16 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-15860:
--

 Summary: Trailing period in file names gets ignored for some 
operations.
 Key: HADOOP-15860
 URL: https://issues.apache.org/jira/browse/HADOOP-15860
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
Assignee: Sean Mackrory


If you create a directory with a trailing period (e.g. '/test.') the period is 
silently dropped, and will be listed as simply '/test'. '/test.test' appears to 
work just fine.
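A minimal repro sketch, assuming {{fs.defaultFS}} already points at the store under test:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TrailingPeriodRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    fs.mkdirs(new Path("/test."));       // trailing period: reportedly dropped
    fs.mkdirs(new Path("/test.test"));   // embedded period: works fine
    for (FileStatus st : fs.listStatus(new Path("/"))) {
      System.out.println(st.getPath()); // expect to see '/test', not '/test.'
    }
  }
}
{code}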



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15841) ABFS: change createRemoteFileSystemDuringInitialization default to true

2018-10-16 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652636#comment-16652636
 ] 

Sean Mackrory commented on HADOOP-15841:


My main motivation here is that without a way for people to create 
ABFS-compatible containers in the portal, I've been having to set this to true 
*a lot*. If/when that can be done, then I would agree.

> ABFS: change createRemoteFileSystemDuringInitialization default to true
> ---
>
> Key: HADOOP-15841
> URL: https://issues.apache.org/jira/browse/HADOOP-15841
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Major
>
> I haven't seen a way to create a working container (at least for the dfs 
> endpoint) except for setting 
> fs.azure.createRemoteFileSystemDuringInitialization=true. I personally don't 
> see that much of a downside to having it default to true, and it's a mild 
> inconvenience to remember to set it to true for some action to create a 
> container. I vaguely recall [~tmarquardt] considering changing this default 
> too.
> I propose we do it?
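For context, a one-line sketch of the setting involved (programmatic equivalent of the core-site.xml property named above; {{conf}} is an existing {{Configuration}}):
{code}
conf.setBoolean("fs.azure.createRemoteFileSystemDuringInitialization", true);
{code}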



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652495#comment-16652495
 ] 

Hadoop QA commented on HADOOP-15859:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944206/HADOOP-15859.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux c39192ea1aa1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d59ca43 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15381/testReport/ |
| Max. process+thread count | 1362 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15381/console |

[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652427#comment-16652427
 ] 

Hadoop QA commented on HADOOP-14556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 37 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
22s{color} | {color:green} root generated 0 new + 1326 unchanged - 1 fixed = 
1326 total (was 1327) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 29s{color} | {color:orange} root: The patch generated 20 new + 185 unchanged 
- 8 fixed = 205 total (was 193) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 136 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
13s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
38s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Updated] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15859:

Status: Patch Available  (was: Open)

Attached a patch that removes the JNI setting of the remaining field per Ben's 
analysis above and cleans up the naming re: objects vs. classes in the JNI 
function arguments.
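For readers less familiar with JNI, a minimal illustrative sketch of the static-vs-instance distinction (simplified names, not the actual Hadoop source):
{code}
public class Decompressor {
  private int remaining;   // instance field the native code wants to update

  // Static native method: the JNI entry point is generated as
  //   JNIEXPORT void JNICALL Java_Decompressor_init(JNIEnv *env, jclass clazz)
  // where 'clazz' is the java.lang.Class object. Calling
  // (*env)->SetIntField(env, clazz, remainingField, 0) on it writes into
  // the class object rather than an instance -- the corruption above.
  private static native void init();

  // Instance native method: the entry point instead receives
  //   (JNIEnv *env, jobject thisObj)
  // and SetIntField on 'thisObj' updates a real instance safely.
  private native void inflate();
}
{code}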

> ZStandardDecompressor.c mistakes a class for an instance
> 
>
> Key: HADOOP-15859
> URL: https://issues.apache.org/jira/browse/HADOOP-15859
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2, 2.9.0
>Reporter: Ben Lau
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15859.001.patch
>
>
> As a follow up to HADOOP-15820, I was doing more testing on ZSTD compression 
> and still encountered segfaults in the JVM in HBase after that fix. 
> I took a deeper look and realized there is still another bug, which looks 
> like it's that we are actually [calling 
> setInt()|https://github.com/apache/hadoop/blob/f13e231025333ebf80b30bbdce1296cef554943b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
>  on the "remaining" variable on the ZStandardDecompressor class itself 
> (instead of an instance of that class) because the Java stub for the native C 
> init() function [is marked 
> static|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
>  leading to memory corruption and a crash during GC later.
> Initially I thought we would fix this by changing the Java init() method to 
> be non-static, but it looks like the "remaining" setInt() call is actually 
> unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
> "remaining" to 0 right after calling the JNI init() 
> call|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
>  So ZStandardDecompressor.java init() doesn't have to be changed to an 
> instance method, we can leave it as static, but remove the JNI init() call's 
> "remaining" setInt() call altogether.
> Furthermore we should probably clean up the class/instance distinction in the 
> C file because that's what led to this confusion. There are some other 
> methods where the distinction is incorrect or ambiguous, we should fix them 
> to prevent this from happening again.
> I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
> similar problems and needs to be fixed too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15859:

Attachment: HADOOP-15859.001.patch

> ZStandardDecompressor.c mistakes a class for an instance
> 
>
> Key: HADOOP-15859
> URL: https://issues.apache.org/jira/browse/HADOOP-15859
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Ben Lau
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15859.001.patch
>
>
> As a follow up to HADOOP-15820, I was doing more testing on ZSTD compression 
> and still encountered segfaults in the JVM in HBase after that fix. 
> I took a deeper look and realized there is still another bug, which looks 
> like it's that we are actually [calling 
> setInt()|https://github.com/apache/hadoop/blob/f13e231025333ebf80b30bbdce1296cef554943b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
>  on the "remaining" variable on the ZStandardDecompressor class itself 
> (instead of an instance of that class) because the Java stub for the native C 
> init() function [is marked 
> static|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
>  leading to memory corruption and a crash during GC later.
> Initially I thought we would fix this by changing the Java init() method to 
> be non-static, but it looks like the "remaining" setInt() call is actually 
> unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
> "remaining" to 0 right after calling the JNI init() 
> call|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
>  So ZStandardDecompressor.java init() doesn't have to be changed to an 
> instance method, we can leave it as static, but remove the JNI init() call's 
> "remaining" setInt() call altogether.
> Furthermore we should probably clean up the class/instance distinction in the 
> C file because that's what led to this confusion. There are some other 
> methods where the distinction is incorrect or ambiguous, we should fix them 
> to prevent this from happening again.
> I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
> similar problems and needs to be fixed too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15859:

Affects Version/s: 2.9.0
   3.0.0-alpha2
 Target Version/s: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2

> ZStandardDecompressor.c mistakes a class for an instance
> 
>
> Key: HADOOP-15859
> URL: https://issues.apache.org/jira/browse/HADOOP-15859
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Ben Lau
>Assignee: Jason Lowe
>Priority: Blocker
>
> As a follow up to HADOOP-15820, I was doing more testing on ZSTD compression 
> and still encountered segfaults in the JVM in HBase after that fix. 
> I took a deeper look and realized there is still another bug, which looks 
> like it's that we are actually [calling 
> setInt()|https://github.com/apache/hadoop/blob/f13e231025333ebf80b30bbdce1296cef554943b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
>  on the "remaining" variable on the ZStandardDecompressor class itself 
> (instead of an instance of that class) because the Java stub for the native C 
> init() function [is marked 
> static|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
>  leading to memory corruption and a crash during GC later.
> Initially I thought we would fix this by changing the Java init() method to 
> be non-static, but it looks like the "remaining" setInt() call is actually 
> unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
> "remaining" to 0 right after calling the JNI init() 
> call|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
>  So ZStandardDecompressor.java init() doesn't have to be changed to an 
> instance method, we can leave it as static, but remove the JNI init() call's 
> "remaining" setInt() call altogether.
> Furthermore we should probably clean up the class/instance distinction in the 
> C file because that's what led to this confusion. There are some other 
> methods where the distinction is incorrect or ambiguous, we should fix them 
> to prevent this from happening again.
> I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
> similar problems and needs to be fixed too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15858) S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for custom s3 endpoint

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15858:

Priority: Minor  (was: Major)

> S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for 
> custom s3 endpoint
> -
>
> Key: HADOOP-15858
> URL: https://issues.apache.org/jira/browse/HADOOP-15858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.1, 2.7.7
> Environment: Hadoop 2.7.7 and 2.9.1
> Java openJDK 8:  8u181-b13-0ubuntu0.18.04.1
> Swift S3 api
>  
>Reporter: antoine
>Priority: Minor
>
> I'm trying to connect to an internal Swift server using the S3A capability of 
> Hadoop. The server works with python boto; it contains one bucket named 
> {{test}}, and in this bucket there is one file, {{test.file}}.
> So far it's been impossible for me to reach the server properly; each time I 
> try, it either ignores fs.s3a.endpoint or treats it incorrectly.
> core-site.xml:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.s3a.access.key</name>
>     <value>mykey</value>
>   </property>
>   <property>
>     <name>fs.s3a.secret.key</name>
>     <value>mysecret</value>
>   </property>
>   <property>
>     <name>fs.s3a.endpoint</name>
>     <value>my.endpoint.fr:8080</value>
>   </property>
>   <property>
>     <name>fs.s3a.connection.ssl.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.path.style.access</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.impl</name>
>     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
>   </property>
> </configuration>
> {code}
> To debug this issue, I've tried using this tool: 
> [cloudstore|https://github.com/steveloughran/cloudstore/releases], which helps 
> debug Hadoop filesystems.
> When trying to list my bucket {{test}} as {{s3a://test/}}, I can see that it 
> connects to [https://test.s3.amazonaws.com/], meaning that the request for 
> {{test}} goes to the stock Amazon S3 endpoint, ignoring my previous settings.
> When trying to list my bucket {{test}} using the URL 
> {{s3a://my.endpoint.fr:8080/test/}}, I can see that it connects to 
> [https://my.endpoint.fr/], meaning that it ignores the port I set in the 
> fs.s3a.endpoint configuration, which of course doesn't work because my server 
> is listening on port 8080.
> I've tried with {{fs.s3a.path.style.access}} set to false; it's pretty much 
> the same.
> I'm sorry if it's not a bug, but any help or consideration would be very much 
> appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15858) S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for custom s3 endpoint

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-15858:
-

> S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for 
> custom s3 endpoint
> -
>
> Key: HADOOP-15858
> URL: https://issues.apache.org/jira/browse/HADOOP-15858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.1, 2.7.7
> Environment: Hadoop 2.7.7 and 2.9.1
> Java openJDK 8:  8u181-b13-0ubuntu0.18.04.1
> Swift S3 api
>  
>Reporter: antoine
>Priority: Major
>
> I'm trying to connect to an internal Swift server using the S3A capability of 
> Hadoop. The server works with python boto; it contains one bucket named 
> {{test}}, and in this bucket there is one file, {{test.file}}.
> So far it's been impossible for me to reach the server properly; each time I 
> try, it either ignores fs.s3a.endpoint or treats it incorrectly.
> core-site.xml:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.s3a.access.key</name>
>     <value>mykey</value>
>   </property>
>   <property>
>     <name>fs.s3a.secret.key</name>
>     <value>mysecret</value>
>   </property>
>   <property>
>     <name>fs.s3a.endpoint</name>
>     <value>my.endpoint.fr:8080</value>
>   </property>
>   <property>
>     <name>fs.s3a.connection.ssl.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.path.style.access</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.impl</name>
>     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
>   </property>
> </configuration>
> {code}
> To debug this issue, I've tried using this tool: 
> [cloudstore|https://github.com/steveloughran/cloudstore/releases], which helps 
> debug Hadoop filesystems.
> When trying to list my bucket {{test}} as {{s3a://test/}}, I can see that it 
> connects to [https://test.s3.amazonaws.com/], meaning that the request for 
> {{test}} goes to the stock Amazon S3 endpoint, ignoring my previous settings.
> When trying to list my bucket {{test}} using the URL 
> {{s3a://my.endpoint.fr:8080/test/}}, I can see that it connects to 
> [https://my.endpoint.fr/], meaning that it ignores the port I set in the 
> fs.s3a.endpoint configuration, which of course doesn't work because my server 
> is listening on port 8080.
> I've tried with {{fs.s3a.path.style.access}} set to false; it's pretty much 
> the same.
> I'm sorry if it's not a bug, but any help or consideration would be very much 
> appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15858) S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for custom s3 endpoint

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15858.
-
Resolution: Invalid

> S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for 
> custom s3 endpoint
> -
>
> Key: HADOOP-15858
> URL: https://issues.apache.org/jira/browse/HADOOP-15858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.1, 2.7.7
> Environment: Hadoop 2.7.7 and 2.9.1
> Java openJDK 8:  8u181-b13-0ubuntu0.18.04.1
> Swift S3 api
>  
>Reporter: antoine
>Priority: Major
>
> I'm trying to connect to an internal Swift server using the S3A capability of 
> Hadoop. The server works with python boto; it contains one bucket named 
> {{test}}, and in this bucket there is one file, {{test.file}}.
> So far it's been impossible for me to reach the server properly; each time I 
> try, it either ignores fs.s3a.endpoint or treats it incorrectly.
> core-site.xml:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.s3a.access.key</name>
>     <value>mykey</value>
>   </property>
>   <property>
>     <name>fs.s3a.secret.key</name>
>     <value>mysecret</value>
>   </property>
>   <property>
>     <name>fs.s3a.endpoint</name>
>     <value>my.endpoint.fr:8080</value>
>   </property>
>   <property>
>     <name>fs.s3a.connection.ssl.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.path.style.access</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.impl</name>
>     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
>   </property>
> </configuration>
> {code}
> To debug this issue, I've tried using this tool: 
> [cloudstore|https://github.com/steveloughran/cloudstore/releases], which helps 
> debug Hadoop filesystems.
> When trying to list my bucket {{test}} as {{s3a://test/}}, I can see that it 
> connects to [https://test.s3.amazonaws.com/], meaning that the request for 
> {{test}} goes to the stock Amazon S3 endpoint, ignoring my previous settings.
> When trying to list my bucket {{test}} using the URL 
> {{s3a://my.endpoint.fr:8080/test/}}, I can see that it connects to 
> [https://my.endpoint.fr/], meaning that it ignores the port I set in the 
> fs.s3a.endpoint configuration, which of course doesn't work because my server 
> is listening on port 8080.
> I've tried with {{fs.s3a.path.style.access}} set to false; it's pretty much 
> the same.
> I'm sorry if it's not a bug, but any help or consideration would be very much 
> appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15858) S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for custom s3 endpoint

2018-10-16 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652277#comment-16652277
 ] 

Steve Loughran commented on HADOOP-15858:
-

no worries :)

FWIW, the debug tool I use for printing out the settings is 
[storediag|https://github.com/steveloughran/cloudstore/releases]; see how it 
helps
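(A typical invocation, assuming the jar name and entry point from the cloudstore releases page:)
{noformat}
hadoop jar cloudstore-*.jar storediag s3a://test/
{noformat}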

> S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for 
> custom s3 endpoint
> -
>
> Key: HADOOP-15858
> URL: https://issues.apache.org/jira/browse/HADOOP-15858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.1, 2.7.7
> Environment: Hadoop 2.7.7 and 2.9.1
> Java openJDK 8:  8u181-b13-0ubuntu0.18.04.1
> Swift S3 api
>  
>Reporter: antoine
>Priority: Major
>
> I'm trying to connect to an internal Swift server using the S3A capability of 
> Hadoop. The server works with python boto; it contains one bucket named 
> {{test}}, and in this bucket there is one file, {{test.file}}.
> So far it's been impossible for me to reach the server properly; each time I 
> try, it either ignores fs.s3a.endpoint or treats it incorrectly.
> core-site.xml:
> {code:xml}
> <configuration>
>   <property>
>     <name>fs.s3a.access.key</name>
>     <value>mykey</value>
>   </property>
>   <property>
>     <name>fs.s3a.secret.key</name>
>     <value>mysecret</value>
>   </property>
>   <property>
>     <name>fs.s3a.endpoint</name>
>     <value>my.endpoint.fr:8080</value>
>   </property>
>   <property>
>     <name>fs.s3a.connection.ssl.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.path.style.access</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>fs.s3a.impl</name>
>     <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
>   </property>
> </configuration>
> {code}
> To debug this issue, I've tried using this tool: 
> [cloudstore|https://github.com/steveloughran/cloudstore/releases], which helps 
> debug Hadoop filesystems.
> When trying to list my bucket {{test}} as {{s3a://test/}}, I can see that it 
> connects to [https://test.s3.amazonaws.com/], meaning that the request for 
> {{test}} goes to the stock Amazon S3 endpoint, ignoring my previous settings.
> When trying to list my bucket {{test}} using the URL 
> {{s3a://my.endpoint.fr:8080/test/}}, I can see that it connects to 
> [https://my.endpoint.fr/], meaning that it ignores the port I set in the 
> fs.s3a.endpoint configuration, which of course doesn't work because my server 
> is listening on port 8080.
> I've tried with {{fs.s3a.path.style.access}} set to false; it's pretty much 
> the same.
> I'm sorry if it's not a bug, but any help or consideration would be very much 
> appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652265#comment-16652265
 ] 

Gabor Bota commented on HADOOP-15848:
-

You are right [~ste...@apache.org], thanks for the fix. 

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652257#comment-16652257
 ] 

Hudson commented on HADOOP-15826:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #15229 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15229/])
HADOOP-15826. @Retries annotation of putObject() call & uses wrong. (stevel: 
rev d59ca43bff8a457ce7ab62a61acd89aacbe71b93)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java


> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15826-001.patch, HADOOP-15826-002.patch
>
>
> The retry annotations of the S3AFilesystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15826:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

LGTM +1, committed with joint credits

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-15826-001.patch, HADOOP-15826-002.patch
>
>
> The retry annotations of the S3AFilesystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15848:

Affects Version/s: (was: 3.1.1)
   3.2.0

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15848:

   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16652251#comment-16652251
 ] 

Steve Loughran commented on HADOOP-15848:
-

+1, committed to branch-3.2

Gabor: this is only one test case in the whole suite being missed, so no, not 
a POM problem.

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Patch Available  (was: Open)

HADOOP-14556 patch 014
* instrumentation to track the getDelegationToken() invocation count (new 
CommonStatisticNames field) and the number of tokens issued; both are probed 
for in one of the tests
* DelegatedMR job cleaned up, now about as far as can easily be done
* there's an origin string on token IDs to make them easier to debug, usually 
(hostname, time); when propagating session secrets this is noted in the 
origin, and for role tokens, so is the role ARN
* checkstyle and minor cleanups

*This stuff is ready to play with; can anyone have a go?*
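For anyone having a go, a minimal client-side sketch using the standard {{FileSystem}} API (the bucket name is made up):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class FetchS3ADelegationToken {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    // With the patch applied, S3A should return a token wrapping
    // short-lived STS session credentials, marshalled with the job.
    Token<?> token = fs.getDelegationToken("yarn");   // renewer name
    System.out.println("Issued: " + token);
  }
}
{code}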

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Attachment: HADOOP-14556-014.patch

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14556) S3A to support Delegation Tokens

2018-10-16 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14556:

Status: Open  (was: Patch Available)

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request a short-lived session secret & ID; 
> these will be saved in the token and marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Ben Lau (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Lau updated HADOOP-15859:
-
Description: 
As a follow-up to HADOOP-15820, I was doing more testing on ZSTD compression 
and still encountered segfaults in the JVM in HBase after that fix. 

I took a deeper look and realized there is still another bug: we are actually 
[calling 
setInt()|https://github.com/apache/hadoop/blob/f13e231025333ebf80b30bbdce1296cef554943b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
 on the "remaining" variable on the ZStandardDecompressor class itself (instead 
of an instance of that class) because the Java stub for the native C init() 
function [is marked 
static|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
 leading to memory corruption and a crash during GC later.

Initially I thought we would fix this by changing the Java init() method to be 
non-static, but it looks like the "remaining" setInt() call is actually 
unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
"remaining" to 0 right after calling the JNI init() 
call|https://github.com/apache/hadoop/blob/a0a276162147e843a5a4e028abdca5b66f5118da/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
 So ZStandardDecompressor.java init() doesn't have to be changed to an instance 
method, we can leave it as static, but remove the JNI init() call's "remaining" 
setInt() call altogether.

Furthermore we should probably clean up the class/instance distinction in the C 
file because that's what led to this confusion. There are some other methods 
where the distinction is incorrect or ambiguous, we should fix them to prevent 
this from happening again.

I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
similar problems and needs to be fixed too.

  was:
As a follow-up to HADOOP-15820, I was doing more testing on ZSTD compression 
and still encountered segfaults in the JVM in HBase after that fix. 

I took a deeper look and realized there is still another bug: we are actually 
[calling 
setInt()|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
 on the "remaining" variable on the ZStandardDecompressor class itself (instead 
of an instance of that class) because the Java stub for the native C init() 
function [is marked 
static|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
 leading to memory corruption and a crash during GC later.

Initially I thought we would fix this by changing the Java init() method to be 
non-static, but it looks like the "remaining" setInt() call is actually 
unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
"remaining" to 0 right after calling the JNI init() 
call|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
 So ZStandardDecompressor.java init() doesn't have to be changed to an instance 
method, we can leave it as static, but remove the JNI init() call's "remaining" 
setInt() call altogether.

Furthermore we should probably clean up the class/instance distinction in the C 
file because that's what led to this confusion. There are some other methods 
where the distinction is incorrect or ambiguous, we should fix them to prevent 
this from happening again.

I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
similar problems and needs to be fixed too.
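
For context, a minimal Java-side sketch of the static/instance distinction at 
issue (hypothetical class, not the Hadoop source):

{code:java}
// Illustration only: how the Java modifier changes what the C side receives.
class ZstdJniSketch {
  private int remaining;

  // The generated C entry point is
  //   void Java_ZstdJniSketch_init(JNIEnv *env, jclass clazz)
  // where 'clazz' is the CLASS object, not an instance. A
  // SetIntField(env, clazz, remainingField, 0) there writes into the class
  // itself, corrupting memory and crashing later, e.g. during GC.
  private static native void init();

  // The generated C entry point is
  //   void Java_ZstdJniSketch_initInstance(JNIEnv *env, jobject obj)
  // where 'obj' is an instance, so SetIntField(env, obj, remainingField, 0)
  // would be legal.
  private native void initInstance();
}
{code}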


> ZStandardDecompressor.c mistakes a class for an instance
> 
>
> Key: HADOOP-15859
> URL: https://issues.apache.org/jira/browse/HADOOP-15859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Jason Lowe
>Priority: Blocker
>
> As a follow-up to HADOOP-15820, I was doing more testing on ZSTD compression 
> and still encountered segfaults in the JVM in HBase after that fix. 
> I took a deeper look and realized there is still another bug: we are actually 
> [calling 
> setInt()|https://github.com/apache/hadoop/blob/f13e231025333ebf80b30bbdce1296cef554943b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
>  on the "remaining" variable on the ZStandardDecompressor class itself 
> (instead of an instance of that class) because the Java stub for the native C 
> init() function is marked static, leading to memory corruption and a crash 
> during GC later.

[jira] [Assigned] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Jonathan Eagles (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles reassigned HADOOP-15859:


Assignee: Jason Lowe

> ZStandardDecompressor.c mistakes a class for an instance
> 
>
> Key: HADOOP-15859
> URL: https://issues.apache.org/jira/browse/HADOOP-15859
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ben Lau
>Assignee: Jason Lowe
>Priority: Blocker
>
> As a follow-up to HADOOP-15820, I was doing more testing on ZSTD compression 
> and still encountered segfaults in the JVM in HBase after that fix. 
> I took a deeper look and realized there is still another bug: we are actually 
> [calling 
> setInt()|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
>  on the "remaining" variable on the ZStandardDecompressor class itself 
> (instead of an instance of that class) because the Java stub for the native C 
> init() function [is marked 
> static|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
>  leading to memory corruption and a crash during GC later.
> Initially I thought we would fix this by changing the Java init() method to 
> be non-static, but it looks like the "remaining" setInt() call is actually 
> unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
> "remaining" to 0 right after calling the JNI init() 
> call|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
>  So ZStandardDecompressor.java init() doesn't have to be changed to an 
> instance method, we can leave it as static, but remove the JNI init() call's 
> "remaining" setInt() call altogether.
> Furthermore we should probably clean up the class/instance distinction in the 
> C file because that's what led to this confusion. There are some other 
> methods where the distinction is incorrect or ambiguous, we should fix them 
> to prevent this from happening again.
> I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
> similar problems and needs to be fixed too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15859) ZStandardDecompressor.c mistakes a class for an instance

2018-10-16 Thread Ben Lau (JIRA)
Ben Lau created HADOOP-15859:


 Summary: ZStandardDecompressor.c mistakes a class for an instance
 Key: HADOOP-15859
 URL: https://issues.apache.org/jira/browse/HADOOP-15859
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ben Lau


As a follow-up to HADOOP-15820, I was doing more testing on ZSTD compression 
and still encountered segfaults in the JVM in HBase after that fix. 

I took a deeper look and realized there is still another bug: we are actually 
[calling 
setInt()|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.c#L148]
 on the "remaining" variable on the ZStandardDecompressor class itself (instead 
of an instance of that class) because the Java stub for the native C init() 
function [is marked 
static|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L253],
 leading to memory corruption and a crash during GC later.

Initially I thought we would fix this by changing the Java init() method to be 
non-static, but it looks like the "remaining" setInt() call is actually 
unnecessary anyway, because in ZStandardDecompressor.java's reset() we [set 
"remaining" to 0 right after calling the JNI init() 
call|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/zstd/ZStandardDecompressor.java#L216].
 So ZStandardDecompressor.java init() doesn't have to be changed to an instance 
method, we can leave it as static, but remove the JNI init() call's "remaining" 
setInt() call altogether.

Furthermore we should probably clean up the class/instance distinction in the C 
file because that's what led to this confusion. There are some other methods 
where the distinction is incorrect or ambiguous, we should fix them to prevent 
this from happening again.

I talked to [~jlowe] who further pointed out the ZStandardCompressor also has 
similar problems and needs to be fixed too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15852) QuotaUsage Review

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651957#comment-16651957
 ] 

Hadoop QA commented on HADOOP-15852:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 11 unchanged - 1 fixed = 11 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944135/HADOOP-15852.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f065b413c642 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bf8a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15377/testReport/ |
| Max. process+thread count | 1355 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15377/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651943#comment-16651943
 ] 

Gabor Bota commented on HADOOP-15848:
-

Thanks for the patch [~ehiggs]! I think it would be better if we could skip the 
test in the pom.xml. What do you think?

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651907#comment-16651907
 ] 

Hadoop QA commented on HADOOP-15826:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
24s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944148/HADOOP-15826-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d669f4dab306 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bf8a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15379/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15379/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> @Retries annotation of putObject() call & uses wrong
> 

[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651905#comment-16651905
 ] 

Hadoop QA commented on HADOOP-15848:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15848 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944145/HADOOP-15848.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3604cbd1438b 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bf8a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15378/testReport/ |
| Max. process+thread count | 339 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15378/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: 

[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651846#comment-16651846
 ] 

Sunil Govindan commented on HADOOP-15857:
-

[~elek] [~jnp] Thanks for fixing last change as addendum.

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch, 
> HADOOP-15857-branch-3.2.addendum.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)
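
For context, this is why the XML key is redundant: Hadoop discovers FileSystem 
implementations via java.util.ServiceLoader, so any jar shipping a 
META-INF/services/org.apache.hadoop.fs.FileSystem entry (a text file listing 
implementation class names) is found without an fs.<scheme>.impl key. A minimal 
sketch (illustrative class name, not from the patch):

{code:java}
import java.util.ServiceLoader;
import org.apache.hadoop.fs.FileSystem;

public class ListFileSystemProviders {
  public static void main(String[] args) {
    // Every jar on the classpath that contains
    // META-INF/services/org.apache.hadoop.fs.FileSystem contributes entries.
    for (FileSystem fs : ServiceLoader.load(FileSystem.class)) {
      System.out.println(fs.getClass().getName());
    }
  }
}
{code}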



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15826:

Status: Patch Available  (was: Open)

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch, HADOOP-15826-002.patch
>
>
> The retry annotations of the S3AFileSystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15826:

Status: Open  (was: Patch Available)

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch, HADOOP-15826-002.patch
>
>
> The retry annotations of the S3AFileSystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15826:

Attachment: HADOOP-15826-002.patch

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch, HADOOP-15826-002.patch
>
>
> The retry annotations of the S3AFileSystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong

2018-10-16 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651762#comment-16651762
 ] 

Ewan Higgs commented on HADOOP-15826:
-

002
* Applies the aforementioned suggestion to tag revertCommit as OnceTranslated
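
For context, a minimal sketch of what such a tag asserts, assuming the 
hadoop-aws {{org.apache.hadoop.fs.s3a.Retries}} annotations (the method body is 
illustrative, not the actual patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.s3a.Retries;

class WriteOperationHelperSketch {
  // "Once, translated": one attempt only, with SDK exceptions mapped to
  // IOExceptions, so callers know they own any retry policy around this call.
  @Retries.OnceTranslated
  void revertCommit(String destKey) throws IOException {
    // illustrative single-attempt body
  }
}
{code}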

> @Retries annotation of putObject() call & uses wrong
> 
>
> Key: HADOOP-15826
> URL: https://issues.apache.org/jira/browse/HADOOP-15826
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15826-001.patch, HADOOP-15826-002.patch
>
>
> The retry annotations of the S3AFileSystem putObject call and its 
> writeOperationsHelper use aren't in sync with what the code does.
> Fix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Ewan Higgs (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651752#comment-16651752
 ] 

Ewan Higgs commented on HADOOP-15848:
-

01
- Override the base class impl and do nothing. (I tried tagging it with 
{{@Ignore}} but it didn't really seem to work.)
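
That approach would look roughly like the following in the S3A test subclass (a 
sketch assuming JUnit 4 and the method name from this issue):

{code:java}
// In ITestS3AContractMultipartUploader: neutralize the inherited test, since
// S3 requires part numbers >= 1 and rejects the empty-part case.
@Override
public void testMultipartUploadEmptyPart() {
  // Deliberately empty: overriding with a no-op skips the base class body.
}
{code}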

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15848:

Assignee: Ewan Higgs
  Status: Patch Available  (was: Open)

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Ewan Higgs (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HADOOP-15848:

Attachment: HADOOP-15848.01.patch

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651730#comment-16651730
 ] 

Elek, Marton commented on HADOOP-15857:
---

Oh, thanks [~jnp], you are right. Sorry, I missed it. I uploaded the addendum 
patch. Will commit it if no objections.

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch, 
> HADOOP-15857-branch-3.2.addendum.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651731#comment-16651731
 ] 

Jitendra Nath Pandey commented on HADOOP-15857:
---

+1 for the addendum patch. Thanks for addressing this [~elek].

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch, 
> HADOOP-15857-branch-3.2.addendum.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15857:
--
Attachment: HADOOP-15857-branch-3.2.addendum.patch

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch, 
> HADOOP-15857-branch-3.2.addendum.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651711#comment-16651711
 ] 

Jitendra Nath Pandey commented on HADOOP-15857:
---

I think we should remove fs.AbstractFileSystem.o3.impl from branch-3.2 as well.

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651695#comment-16651695
 ] 

Elek, Marton commented on HADOOP-15857:
---

Thank you very much [~sunilg] for including it in the release at the last 
minute...

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-15857:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to branch-3.2

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 3.2.0
>
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651689#comment-16651689
 ] 

Sunil Govindan commented on HADOOP-15857:
-

Thanks. +1.

Committing shortly.

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed from o3:// to o3fs:// in HDDS-651, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix this is to simply remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15852) QuotaUsage Review

2018-10-16 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15852:
-
Status: Patch Available  (was: Open)

> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashCode/equals
> * Use StringBuilder instead of StringBuffer
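
The change itself is mechanical; a before/after sketch (illustrative, not from 
the patch):

{code:java}
// Illustration only. StringBuffer synchronizes every call, a cost that
// single-threaded formatting code never needs; StringBuilder is the drop-in,
// unsynchronized replacement with the same API.
public class QuotaFormatSketch {
  static String format(long quota) {
    StringBuilder sb = new StringBuilder();  // was: new StringBuffer()
    sb.append("quota=").append(quota);
    return sb.toString();
  }
}
{code}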



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15852) QuotaUsage Review

2018-10-16 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15852:
-
Status: Open  (was: Patch Available)

> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashCode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15852) QuotaUsage Review

2018-10-16 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HADOOP-15852:
-
Attachment: HADOOP-15852.2.patch

> QuotaUsage Review
> -
>
> Key: HADOOP-15852
> URL: https://issues.apache.org/jira/browse/HADOOP-15852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch
>
>
> My new mission is to remove instances of {{StringBuffer}} in favor of 
> {{StringBuilder}}.
> * Simplify Code
> * Use Eclipse to generate hashCode/equals
> * Use StringBuilder instead of StringBuffer



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651653#comment-16651653
 ] 

Hadoop QA commented on HADOOP-15857:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 3s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
51m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15857 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944109/HADOOP-15857-branch-3.2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 1f1c94aca702 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.2 / ced2596 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15376/testReport/ |
| Max. process+thread count | 1367 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15376/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>

[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651638#comment-16651638
 ] 

Hadoop QA commented on HADOOP-15616:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-cloud-storage-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-cloud-storage-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-cos in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-cloud-storage-project in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || 

[jira] [Resolved] (HADOOP-15858) S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for custom s3 endpoint

2018-10-16 Thread antoine (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

antoine resolved HADOOP-15858.
--
Resolution: Fixed

My mistake.

I wrote the worst site-core.xml file ever.

> S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for 
> custom s3 endpoint
> -
>
> Key: HADOOP-15858
> URL: https://issues.apache.org/jira/browse/HADOOP-15858
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.9.1, 2.7.7
> Environment: Hadoop 2.7.7 and 2.9.1
> Java openJDK 8:  8u181-b13-0ubuntu0.18.04.1
> Swift S3 api
>  
>Reporter: antoine
>Priority: Major
>
> I'm trying to connect to an internal Swift server using the S3A capability of 
> Hadoop. This server works with python boto; it contains one bucket named 
> {{test}}, and in this bucket there's one file, {{test.file}}.
> So far it's been impossible for me to reach the server properly; each time I 
> try, it either ignores fs.s3a.endpoint or treats it incorrectly:
> site-core.xml:
> {quote}{{<configuration>}}
>  {{<property>}}
>  {{<name>fs.s3a.access.key</name>}}
>  {{<value>mykey</value>}}
>  {{</property>}}
>  {{<property>}}
>  {{<name>fs.s3a.secret.key</name>}}
>  {{<value>mysecret</value>}}
>  {{</property>}}
>  {{<property>}}
>  {{<name>fs.s3a.endpoint</name>}}
>  {{<value>my.endpoint.fr:8080</value>}}
>  {{</property>}}
>  {{<property>}}
>  {{<name>fs.s3a.connection.ssl.enabled</name>}}
>  {{<value>true</value>}}
>  {{</property>}}
>  {{<property>}}
>  {{<name>fs.s3a.path.style.access</name>}}
>  {{<value>true</value>}}
>  {{</property>}}
>  {{<property>}}
>  {{<name>fs.s3a.impl</name>}}
>  {{<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>}}
>  {{</property>}}
>  {{</configuration>}}
> {quote}
> To debug this issue, I've tried using this tool: 
> [cloudstore|https://github.com/steveloughran/cloudstore/releases], which helps 
> debug Hadoop filesystems. 
>  When trying to list my bucket {{test}}: 
>  {{s3a://test/}}
> I can see that it's connecting to:
> [https://test.s3.amazonaws.com/]
> Meaning that it sends the request for {{test}} to the original S3 server, 
> ignoring my previous settings.
> When trying to list my bucket {{test}} using this URL: 
>  {{s3a://}}{{my.endpoint.fr:8080}}{{/test/}}
> I can see that it's connecting to:
> [https://my.endpoint.fr/]
> Meaning that it ignores the port I set in the fs.s3a.endpoint configuration, 
> which of course doesn't work because my server is listening on port 8080.
> I've tried setting {{fs.s3a.path.style.access}} to false; it's pretty much the 
> same.
> I'm sorry if it's not a bug, but any help or consideration would be very much 
> appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-16 Thread Boris Vulikh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651544#comment-16651544
 ] 

Boris Vulikh commented on HADOOP-15815:
---

The above build failed with the error message below.
{code:none}
[INFO] --- maven-shade-plugin:2.4.3:shade (default) @ hadoop-client-minicluster 
---
...
[INFO] No artifact matching filter org.rocksdb:rocksdbjni
...
[INFO] Apache Hadoop Client Test Minicluster .. FAILURE [02:33 min]
...
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-shade-plugin:2.4.3:shade (default) on project 
hadoop-client-minicluster: Error creating shaded jar: null: 
IllegalArgumentException -> [Help 1] [ERROR] 
{code}
It doesn't look related; however, I'd prefer to rerun the build.

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651516#comment-16651516
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-15857:
--

+1 the patch looks good.

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed in HDDS-651 from o3:// to o3fs://, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix it is to remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15858) S3A: property fs.s3a.endpoint is either ignored or treated incorrectly for custom s3 endpoint

2018-10-16 Thread antoine (JIRA)
antoine created HADOOP-15858:


 Summary: S3A: property fs.s3a.endpoint is either ignored or 
treated incorrectly for custom s3 endpoint
 Key: HADOOP-15858
 URL: https://issues.apache.org/jira/browse/HADOOP-15858
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.7, 2.9.1
 Environment: Hadoop 2.7.7 and 2.9.1

Java openJDK 8:  8u181-b13-0ubuntu0.18.04.1

Swift S3 api

 
Reporter: antoine


I'm trying to connect to an internal Swift server using the S3A capability of 
Hadoop. This server works with python boto; it contains one bucket named 
{{test}}, and in this bucket there's one file, {{test.file}}.

So far it's been impossible for me to reach the server properly; each time I 
try, it either ignores fs.s3a.endpoint or treats it incorrectly:

site-core.xml:
{quote}{{<configuration>}}
 {{<property>}}
 {{<name>fs.s3a.access.key</name>}}
 {{<value>mykey</value>}}
 {{</property>}}
 {{<property>}}
 {{<name>fs.s3a.secret.key</name>}}
 {{<value>mysecret</value>}}
 {{</property>}}
 {{<property>}}
 {{<name>fs.s3a.endpoint</name>}}
 {{<value>my.endpoint.fr:8080</value>}}
 {{</property>}}
 {{<property>}}
 {{<name>fs.s3a.connection.ssl.enabled</name>}}
 {{<value>true</value>}}
 {{</property>}}
 {{<property>}}
 {{<name>fs.s3a.path.style.access</name>}}
 {{<value>true</value>}}
 {{</property>}}
 {{<property>}}
 {{<name>fs.s3a.impl</name>}}
 {{<value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>}}
 {{</property>}}
 {{</configuration>}}
{quote}
To debug this issue, I've tried using this tool: 
[cloudstore|https://github.com/steveloughran/cloudstore/releases], which helps 
debug Hadoop filesystems. 
 When trying to list my bucket {{test}}: 
 {{s3a://test/}}

I can see that it's connecting to:

[https://test.s3.amazonaws.com/]

Meaning that it sends the request for {{test}} to the original S3 server, 
ignoring my previous settings.

When trying to list my bucket {{test}} using this URL: 
 {{s3a://}}{{my.endpoint.fr:8080}}{{/test/}}

I can see that it's connecting to:

[https://my.endpoint.fr/]

Meaning that it ignores the port I set in the fs.s3a.endpoint configuration, 
which of course doesn't work because my server is listening on port 8080.

I've tried setting {{fs.s3a.path.style.access}} to false; it's pretty much the same.

I'm sorry if it's not a bug, but any help or consideration would be very much 
appreciated.
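
For reference, a minimal sketch of how a custom endpoint is normally addressed, 
assuming the settings above: the endpoint belongs in the configuration and only 
the bucket name goes into the {{s3a://}} URI (hostname, port, and bucket are 
taken from this report):

{code:none}
# Sketch only: the endpoint is supplied as configuration, not in the URI;
# the s3a:// authority carries just the bucket name.
hadoop fs -D fs.s3a.endpoint=my.endpoint.fr:8080 \
          -D fs.s3a.path.style.access=true \
          -ls s3a://test/
{code}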



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651482#comment-16651482
 ] 

Shashikant Banerjee commented on HADOOP-15857:
--

+1

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed in HDDS-651 from o3:// to o3fs://, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix it is to remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-15857:

Priority: Blocker  (was: Major)

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed in HDDS-651 from o3:// to o3fs://, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix it is to remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15857:
--
Status: Patch Available  (was: Open)

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed in HDDS-651 from o3:// to o3fs://, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix it is to remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15857:
--
Attachment: HADOOP-15857-branch-3.2.001.patch

> Remove ozonefs class name definition from core-default.xml
> --
>
> Key: HADOOP-15857
> URL: https://issues.apache.org/jira/browse/HADOOP-15857
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HADOOP-15857-branch-3.2.001.patch
>
>
> The Ozone file system is being renamed in HDDS-651 from o3:// to o3fs://, but 
> branch-3.2 still contains a reference to o3://.
> The easiest way to fix it is to remove the fs.o3.impl definition from 
> core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
> registered via the Service Provider Interface (META-INF/services...)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15857) Remove ozonefs class name definition from core-default.xml

2018-10-16 Thread Elek, Marton (JIRA)
Elek, Marton created HADOOP-15857:
-

 Summary: Remove ozonefs class name definition from core-default.xml
 Key: HADOOP-15857
 URL: https://issues.apache.org/jira/browse/HADOOP-15857
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Elek, Marton
Assignee: Elek, Marton


The Ozone file system is being renamed in HDDS-651 from o3:// to o3fs://, but 
branch-3.2 still contains a reference to o3://.

The easiest way to fix it is to remove the fs.o3.impl definition from 
core-default.xml on branch-3.2, since as of HDDS-654 the file system can be 
registered via the Service Provider Interface (META-INF/services...)
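
For reference, a minimal sketch of the Service Provider Interface registration 
mentioned above, assuming the standard {{java.util.ServiceLoader}} layout that 
Hadoop's FileSystem loader reads; the class name shown is illustrative of the 
renamed o3fs file system, not taken from the patch:

{code:none}
# File shipped inside the ozonefs jar:
#   META-INF/services/org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.ozone.OzoneFileSystem
{code}

With such an entry present, the o3fs scheme can be resolved at runtime without 
any fs.*.impl key in core-default.xml.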



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651465#comment-16651465
 ] 

Hadoop QA commented on HADOOP-11100:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-11100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944073/HADOOP-11100.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7bf999e45bce 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bf8a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15373/testReport/ |
| Max. process+thread count | 1385 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15373/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

2018-10-16 Thread YangY (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-15616:
---
Attachment: HADOOP-15616.005.patch

> Incorporate Tencent Cloud COS File System Implementation
> 
>
> Key: HADOOP-15616
> URL: https://issues.apache.org/jira/browse/HADOOP-15616
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/cos
>Reporter: Junping Du
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, 
> HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, 
> Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top two cloud vendors in the Chinese market, and its 
> object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely used 
> among China's cloud users, but today it is hard for Hadoop users to access data 
> stored in COS, as Hadoop has no native support for COS.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just as 
> was done before for S3, ADL, OSS, etc. With simple configuration, Hadoop 
> applications can read/write data from COS without any code change.
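
For reference, a hypothetical sketch of the "simple configuration" mentioned 
above; the property names and values are assumptions for illustration, not 
taken from the attached patches:

{code:xml}
<!-- core-site.xml (sketch; property names and values are assumptions) -->
<property>
  <name>fs.cosn.userinfo.secretId</name>
  <value>YOUR_SECRET_ID</value>
</property>
<property>
  <name>fs.cosn.userinfo.secretKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<property>
  <name>fs.cosn.bucket.region</name>
  <value>ap-guangzhou</value>
</property>
{code}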



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-16 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651406#comment-16651406
 ] 

Hadoop QA commented on HADOOP-15815:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
26s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12944069/HADOOP-15815.01-2.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 9b7ef539e663 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0bf8a11 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15374/testReport/ |
| Max. process+thread count | 400 (vs. ulimit of 1) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15374/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> 

[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-16 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651277#comment-16651277
 ] 

Adam Antal commented on HADOOP-11100:
-

Thanks [~xiaochen] for the valuable insight. I added only the fs.ftp.timeout 
config (defaulting to zero), as that matches the current behaviour (note that I 
removed fs.ftp.timeout.enabled since we don't need it). I also modified 
core-default.xml and {{TestCommonConfigurationFields}} as you suggested.

Pending Jenkins for v4.
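
For reference, a minimal sketch of the change under discussion, assuming the 
{{fs.ftp.timeout}} key and zero default described above (simplified, not the 
actual patch):

{code:java}
// Sketch only: read fs.ftp.timeout (seconds; 0 disables the keep-alive,
// matching the current behaviour) and apply it to the FTP control channel.
import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.hadoop.conf.Configuration;

class FtpTimeoutSketch {
  static FTPClient connect(Configuration conf, String host) throws IOException {
    FTPClient client = new FTPClient();
    client.connect(host);
    long timeoutSeconds = conf.getLong("fs.ftp.timeout", 0L);
    client.setControlKeepAliveTimeout(timeoutSeconds);
    return client;
  }
}
{code}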

> Support to configure  ftpClient.setControlKeepAliveTimeout 
> ---
>
> Key: HADOOP-11100
> URL: https://issues.apache.org/jira/browse/HADOOP-11100
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Krishnamoorthy Dharmalingam
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HADOOP-11100.002.patch, HADOOP-11100.003.patch, 
> HADOOP-11100.004.patch, HDFS-11000.001.patch
>
>
> In FTPFileSystem or Configuration, the timeout is currently not configurable.
> It would be very straightforward to configure in the FTPFileSystem.connect() 
> method via
>  ftpClient.setControlKeepAliveTimeout
> like:
> private FTPClient connect() throws IOException {
> ...
> String timeout = conf.get("fs.ftp.timeout." + host);
> ...
>  ftpClient.setControlKeepAliveTimeout(Long.parseLong(timeout));
> 
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-16 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-11100:

Attachment: HADOOP-11100.004.patch

> Support to configure  ftpClient.setControlKeepAliveTimeout 
> ---
>
> Key: HADOOP-11100
> URL: https://issues.apache.org/jira/browse/HADOOP-11100
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Krishnamoorthy Dharmalingam
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HADOOP-11100.002.patch, HADOOP-11100.003.patch, 
> HADOOP-11100.004.patch, HDFS-11000.001.patch
>
>
> In FTPFileSystem or Configuration, the timeout is currently not configurable.
> It would be very straightforward to configure in the FTPFileSystem.connect() 
> method via
>  ftpClient.setControlKeepAliveTimeout
> like:
> private FTPClient connect() throws IOException {
> ...
> String timeout = conf.get("fs.ftp.timeout." + host);
> ...
>  ftpClient.setControlKeepAliveTimeout(Long.parseLong(timeout));
> 
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-16 Thread Boris Vulikh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Vulikh updated HADOOP-15815:
--
Attachment: HADOOP-15815.01-2.patch

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01-2.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-16 Thread Boris Vulikh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Vulikh updated HADOOP-15815:
--
Attachment: (was: HADOOP-15815.01.patch)

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15815) Upgrade Eclipse Jetty version due to security concerns

2018-10-16 Thread Boris Vulikh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16651271#comment-16651271
 ] 

Boris Vulikh commented on HADOOP-15815:
---

The most severe issue solved in 9.3.25 is [issue 2860: "Leakage of 
HttpDestinations in 
HttpClient"|https://github.com/eclipse/jetty.project/issues/2860].

I'll amend the patch to use 9.3.24.
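
For reference, a minimal sketch of the kind of version bump involved, assuming 
Jetty is managed through a {{jetty.version}} property in hadoop-project/pom.xml 
(the property name and the exact release tag are assumptions, not taken from 
the patch):

{code:xml}
<!-- hadoop-project/pom.xml (sketch; release tag assumed) -->
<properties>
  <jetty.version>9.3.24.v20180605</jetty.version>
</properties>
{code}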

> Upgrade Eclipse Jetty version due to security concerns
> --
>
> Key: HADOOP-15815
> URL: https://issues.apache.org/jira/browse/HADOOP-15815
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.1.1, 3.0.3
>Reporter: Boris Vulikh
>Assignee: Boris Vulikh
>Priority: Major
> Attachments: HADOOP-15815.01.patch
>
>
> * 
> [CVE-2017-7657|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7657]
>  * 
> [CVE-2017-7658|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7658]
>  * 
> [CVE-2017-7656|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-7656]
>  * 
> [CVE-2018-12536|https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-12536]
> We should upgrade the dependency to version 9.3.24 or the latest, if possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org