[jira] [Commented] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-27 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780156#comment-16780156
 ] 

Akira Ajisaka commented on HADOOP-15958:


003: ignored license check

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-wip.001.patch
>
>
> Originally reported by [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-02-27 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Attachment: HADOOP-15958-003.patch

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-wip.001.patch
>
>
> Originally reported by [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #526: HDDS-1183. Override getDelegationToken 
API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#issuecomment-468151852
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | +1 | mvninstall | 981 | trunk passed |
   | +1 | compile | 925 | trunk passed |
   | +1 | checkstyle | 190 | trunk passed |
   | -1 | mvnsite | 51 | common in trunk failed. |
   | +1 | shadedclient | 1065 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 36 | common in trunk failed. |
   | +1 | javadoc | 132 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 99 | the patch passed |
   | +1 | compile | 873 | the patch passed |
   | +1 | javac | 873 | the patch passed |
   | +1 | checkstyle | 183 | the patch passed |
   | +1 | mvnsite | 137 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 219 | the patch passed |
   | +1 | javadoc | 129 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 86 | common in the patch failed. |
   | +1 | unit | 49 | common in the patch passed. |
   | +1 | unit | 95 | ozonefs in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6256 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/526 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 980988c88033 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 538bb48 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/branch-mvnsite-hadoop-ozone_common.txt
 |
   | findbugs | v3.1.0-RC1 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/branch-findbugs-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/testReport/ |
   | Max. process+thread count | 2787 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/ozonefs 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16153) Allow TeraGen to use schema-specific output committer

2019-02-27 Thread Yifeng Jiang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifeng Jiang updated HADOOP-16153:
--
Attachment: HADOOP-16153.patch

> Allow TeraGen to use schema-specific output committer
> -
>
> Key: HADOOP-16153
> URL: https://issues.apache.org/jira/browse/HADOOP-16153
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Yifeng Jiang
>Priority: Trivial
> Attachments: HADOOP-16153.patch
>
>
> TeraGen is hard-coded to use FileOutputCommitter to commit the job. This 
> patch allows TeraGen to use a schema-specific committer for optimization, 
> for example the S3A committers for s3a:// object storage.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16153) Allow TeraGen to use schema-specific output committer

2019-02-27 Thread Yifeng Jiang (JIRA)
Yifeng Jiang created HADOOP-16153:
-

 Summary: Allow TeraGen to use schema-specific output committer
 Key: HADOOP-16153
 URL: https://issues.apache.org/jira/browse/HADOOP-16153
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.1
Reporter: Yifeng Jiang


TeraGen is hard-coded to use FileOutputCommitter to commit the job. This patch 
allows TeraGen to use a schema-specific committer for optimization, for 
example the S3A committers for s3a:// object storage.
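
A hedged sketch of the mechanism (not the attached patch): on Hadoop 3.1+, 
TeraOutputFormat#getOutputCommitter could resolve its committer through 
PathOutputCommitterFactory instead of instantiating FileOutputCommitter 
directly, so that a configured scheme-specific factory is picked up. The 
class below is illustrative, not TeraGen code.
{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory;

public class CommitterLookupSketch {
  private OutputCommitter committer;

  public OutputCommitter getOutputCommitter(TaskAttemptContext context)
      throws IOException {
    if (committer == null) {
      Path output = FileOutputFormat.getOutputPath(context);
      // Resolves to FileOutputCommitter by default, or to e.g. an S3A
      // committer when mapreduce.outputcommitter.factory.scheme.s3a is set.
      committer = PathOutputCommitterFactory
          .getCommitterFactory(output, context.getConfiguration())
          .createOutputCommitter(output, context);
    }
    return committer;
  }
}
{code}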



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #528: HDDS-1182. Pipeline Rule where at least one datanode is reported in the pipeline.

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #528: HDDS-1182. Pipeline Rule where at least 
one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528#issuecomment-468127128
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for branch |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 76 | trunk passed |
   | +1 | checkstyle | 32 | trunk passed |
   | +1 | mvnsite | 77 | trunk passed |
   | +1 | shadedclient | 760 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 110 | trunk passed |
   | +1 | javadoc | 61 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 67 | the patch passed |
   | +1 | javac | 67 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 62 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 121 | the patch passed |
   | +1 | javadoc | 57 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 70 | common in the patch failed. |
   | +1 | unit | 137 | server-scm in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3593 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/528 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 561cb5a46c9b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1779fc5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-528/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread Genmao Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780040#comment-16780040
 ] 

Genmao Yu commented on HADOOP-15038:


some minor comments:
1. I like "METASTORE_LOCAL_ENTRY_TTL_MS" better than "METASTORE_LOCAL_ENTRY_TTL"

Besides, to be fully rigorous here: which service endpoint have you tested the 
latest patch against?



> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module.
> Based on this work, other filesystems and object stores could implement 
> their own metastore to address known issues such as consistency problems 
> and metadata operation performance. [~ste...@apache.org] and others have 
> done a great deal of foundational work in {{S3Guard}}, which gives this 
> effort a very helpful starting point. I did some performance testing in 
> HADOOP-14098 and have started related work for Aliyun OSS. There is indeed 
> still work to do in {{S3Guard}}, such as the metadata cache becoming 
> inconsistent with S3, and the same problem will affect other object stores; 
> however, we can do this work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread Genmao Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780040#comment-16780040
 ] 

Genmao Yu edited comment on HADOOP-15038 at 2/28/19 3:04 AM:
-

[~wujinhu] some minor comments:
1. I like "METASTORE_LOCAL_ENTRY_TTL_MS" better than "METASTORE_LOCAL_ENTRY_TTL"

Besides, to be fully rigorous here: which service endpoint have you tested the 
latest patch against?




was (Author: unclegen):
some minor comments:
1. I like "METASTORE_LOCAL_ENTRY_TTL_MS" better than "METASTORE_LOCAL_ENTRY_TTL"

Besides, to be fully rigorous here: which service endpoint have you tested the 
latest patch against?



> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module.
> Based on this work, other filesystems and object stores could implement 
> their own metastore to address known issues such as consistency problems 
> and metadata operation performance. [~ste...@apache.org] and others have 
> done a great deal of foundational work in {{S3Guard}}, which gives this 
> effort a very helpful starting point. I did some performance testing in 
> HADOOP-14098 and have started related work for Aliyun OSS. There is indeed 
> still work to do in {{S3Guard}}, such as the metadata cache becoming 
> inconsistent with S3, and the same problem will affect other object stores; 
> however, we can do this work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 opened a new pull request #528: HDDS-1182. Pipeline Rule where at least one datanode is reported in the pipeline.

2019-02-27 Thread GitBox
bharatviswa504 opened a new pull request #528: HDDS-1182. Pipeline Rule where 
at least one datanode is reported in the pipeline.
URL: https://github.com/apache/hadoop/pull/528
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui 
is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468113722
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1042 | trunk passed |
   | +1 | compile | 72 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 65 | trunk passed |
   | +1 | shadedclient | 777 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 97 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 62 | the patch passed |
   | -1 | jshint | 75 | The patch generated 294 new + 1942 unchanged - 1053 
fixed = 2236 total (was 2995) |
   | +1 | compile | 66 | the patch passed |
   | +1 | javac | 66 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-hdds: The patch generated 12 new + 0 
unchanged - 0 fixed = 12 total (was 0) |
   | +1 | mvnsite | 54 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 766 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 109 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 87 | common in the patch failed. |
   | +1 | unit | 32 | framework in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3616 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux 37fa4eb47056 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cbf82fa |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/diff-patch-jshint.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780013#comment-16780013
 ] 

wujinhu edited comment on HADOOP-15038 at 2/28/19 2:08 AM:
---

Sure. [~uncleGen]

First, I added the *hadoop-cloud-core* module according to the discussion above.

Then, I moved the classes below from *s3guard* to *hadoop-cloud-core*:
{code:java}
//sources
DescendantsIterator.java 
DirListingMetadata.java 
ExpirableMetadata.java 
LocalMetadataEntry.java 
LocalMetadataStore.java 
MetadataStore.java 
MetadataStoreCapabilities.java 
MetadataStoreListFilesIterator.java 
NullMetadataStore.java 
PathMetadata.java 
Tristate.java

//tests
AbstractMSContract.java
MetadataStoreTestBase.java
TestDirListingMetadata.java
TestLocalMetadataStore.java
TestNullMetadataStore.java
{code}
 

Besides, I also moved some utility constants and methods from *s3guard* into:
{code:java}
Constants.java
MetadataUtils.java
MetadataTestUtils.java
{code}
 

So I have changed almost none of the existing logic. :)


was (Author: wujinhu):
Sure. [~uncleGen]

First, I add *hadoop-cloud-core* module according to the discussion above.

Then, I move below classes from *s3guard* to *hadoop-cloud-core*

 
{code:java}
//sources
{code}
*DescendantsIterator.java
DirListingMetadata.java
ExpirableMetadata.java
LocalMetadataEntry.java
LocalMetadataStore.java
MetadataStore.java
MetadataStoreCapabilities.java
MetadataStoreListFilesIterator.java
NullMetadataStore.java
PathMetadata.java
Tristate.java*
{code:java}
//tests
{code}

*AbstractMSContract.java
MetadataStoreTestBase.java
TestDirListingMetadata.java
TestLocalMetadataStore.java
TestNullMetadataStore.java*

 

Besides, I also move some utility constants and methods from *s3guard* to 

 
{code:java}
Constants.java
MetadataUtils.java
MetadataTestUtils.java
{code}
 

So,I almost do not change existing logic.:)

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module.
> Based on this work, other filesystems and object stores could implement 
> their own metastore to address known issues such as consistency problems 
> and metadata operation performance. [~ste...@apache.org] and others have 
> done a great deal of foundational work in {{S3Guard}}, which gives this 
> effort a very helpful starting point. I did some performance testing in 
> HADOOP-14098 and have started related work for Aliyun OSS. There is indeed 
> still work to do in {{S3Guard}}, such as the metadata cache becoming 
> inconsistent with S3, and the same problem will affect other object stores; 
> however, we can do this work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780013#comment-16780013
 ] 

wujinhu commented on HADOOP-15038:
--

Sure. [~uncleGen]

First, I added the *hadoop-cloud-core* module according to the discussion above.

Then, I moved the classes below from *s3guard* to *hadoop-cloud-core*:
{code:java}
//sources
DescendantsIterator.java
DirListingMetadata.java
ExpirableMetadata.java
LocalMetadataEntry.java
LocalMetadataStore.java
MetadataStore.java
MetadataStoreCapabilities.java
MetadataStoreListFilesIterator.java
NullMetadataStore.java
PathMetadata.java
Tristate.java

//tests
AbstractMSContract.java
MetadataStoreTestBase.java
TestDirListingMetadata.java
TestLocalMetadataStore.java
TestNullMetadataStore.java
{code}

Besides, I also moved some utility constants and methods from *s3guard* into:
{code:java}
Constants.java
MetadataUtils.java
MetadataTestUtils.java
{code}

So I have changed almost none of the existing logic. :)

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module.
> Based on this work, other filesystems and object stores could implement 
> their own metastore to address known issues such as consistency problems 
> and metadata operation performance. [~ste...@apache.org] and others have 
> done a great deal of foundational work in {{S3Guard}}, which gives this 
> effort a very helpful starting point. I did some performance testing in 
> HADOOP-14098 and have started related work for Aliyun OSS. There is indeed 
> still work to do in {{S3Guard}}, such as the metadata cache becoming 
> inconsistent with S3, and the same problem will affect other object stores; 
> however, we can do this work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] avijayanhwx commented on a change in pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
avijayanhwx commented on a change in pull request #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261021548
 
 

 ##
 File path: hadoop-hdds/framework/src/main/resources/webapps/static/ozone.js
 ##
 @@ -308,7 +308,6 @@
 ctrl.convertToArray(response.data);
 ctrl.configs = Object.values(ctrl.keyTagMap);
 ctrl.component = 'All';
-console.log("ajay -> " + JSON.stringify(ctrl.configs));
 
 Review comment:
   I believe this file needs to be removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] brownscott commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
brownscott commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui 
is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468104921
 
 
   
   -- 
   Sent from my Android phone with GMX Mail. Please excuse my brevity. On 
2019-02-27, 9:55 p.m., Siddharth wrote:
   @swagle commented on this pull request.
   
   
   
   In 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java:
   > +import org.junit.Assert;
   +import org.junit.Before;
   +import org.junit.Test;
   +
   +import java.io.BufferedWriter;
   +import java.io.File;
   +import java.io.FileWriter;
   +import java.io.IOException;
   +
   +public class TestOzoneConfiguration {
   +
   +  private Configuration conf;
   +  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
   +  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
   +
   +  private BufferedWriter out;
   
   In general I'm not too excited about sharing streams as a coding best 
practice; other reviewers can chime in, but it is better to localize this 
than to deal with side-effect code in tearDown.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] swagle commented on a change in pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
swagle commented on a change in pull request #527: HDDS-1093. Configuration tab 
in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261020806
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
+  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
+
+  private BufferedWriter out;
 
 Review comment:
   In general I'm not too excited about sharing streams as a coding best 
practice; other reviewers can chime in, but it is better to localize this 
than to deal with side-effect code in tearDown.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-02-27 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16152:


 Summary: Upgrade Eclipse Jetty version to 9.4.x
 Key: HADOOP-16152
 URL: https://issues.apache.org/jira/browse/HADOOP-16152
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.2.0
Reporter: Yuming Wang


Some big data projects have already upgraded to Jetty 9.4.x, which causes 
compatibility issues when integrating them with Hadoop:

Spark: 
https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] swagle commented on a change in pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
swagle commented on a change in pull request #527: HDDS-1093. Configuration tab 
in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261020806
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
+  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
+
+  private BufferedWriter out;
 
 Review comment:
   In general I'm not too excited about sharing streams as a coding best 
practice; other reviewers can chime in, but it is better to localize this 
than to deal with side-effect code in tearDown.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-02-27 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16780007#comment-16780007
 ] 

Kai Xie commented on HADOOP-16018:
--

Thanks Steve for reviewing and merging the patch!

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018-branch-2-006.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In CopyCommitter::commitJob, this logic skips reassembling the chunks 
> whenever blocksPerChunk is 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return an empty string, because the switch is constructed 
> without a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  
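
A minimal sketch of the failure mode, using only plain Configuration 
semantics; the "distcp.blocks.per.chunk" label below is an assumption for 
illustration, and the fix amounts to giving BLOCKS_PER_CHUNK a non-empty 
label like it:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class EmptyLabelDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // The value is stored under a real key, as -blocksperchunk intends:
    conf.setInt("distcp.blocks.per.chunk", 8);

    // What CopyCommitter effectively reads while getConfigLabel() is "":
    int broken = conf.getInt("", 0);
    // What it reads once the switch carries a non-empty config label:
    int fixed = conf.getInt("distcp.blocks.per.chunk", 0);

    // Prints "broken=0 fixed=8": the empty key silently falls back to 0.
    System.out.println("broken=" + broken + " fixed=" + fixed);
  }
}
{code}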



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] swagle commented on a change in pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
swagle commented on a change in pull request #527: HDDS-1093. Configuration tab 
in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261020412
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
 
 Review comment:
   Better to use JUnit's TemporaryFolder and let the framework take care of 
creation and cleanup.
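
   A minimal sketch of this suggestion, assuming JUnit 4 (class and test 
names are illustrative):

```java
import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class TestOzoneConfigurationSketch {

  // The rule creates a fresh directory before each test and deletes it
  // afterwards, so tearDown() no longer needs manual File.delete() calls.
  @Rule
  public TemporaryFolder tempDir = new TemporaryFolder();

  @Test
  public void writesConfigIntoTempDir() throws Exception {
    File config = tempDir.newFile("test-config-TestConfiguration.xml");
    // ... write the test XML into 'config' and load it as a resource ...
  }
}
```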


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] swagle commented on a change in pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
swagle commented on a change in pull request #527: HDDS-1093. Configuration tab 
in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#discussion_r261019552
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/TestOzoneConfiguration.java
 ##
 @@ -0,0 +1,171 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+
+public class TestOzoneConfiguration {
+
+  private Configuration conf;
+  final static String CONFIG = new 
File("./test-config-TestConfiguration.xml").getAbsolutePath();
+  final static String CONFIG_CORE = new 
File("./core-site.xml").getAbsolutePath();
+
+  private BufferedWriter out;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new OzoneConfiguration();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    if (out != null) {
+      out.close();
+    }
+    new File(CONFIG).delete();
+    new File(CONFIG_CORE).delete();
+  }
+
+  private void startConfig() throws IOException {
+    out.write("<?xml version=\"1.0\"?>\n");
+    out.write("<configuration>\n");
+  }
+
+  private void endConfig() throws IOException {
+    out.write("</configuration>\n");
+    out.flush();
+    out.close();
+  }
+
+  @Test
+  public void testGetAllPropertiesByTags() throws Exception {
+    try {
+      out = new BufferedWriter(new FileWriter(CONFIG));
+      startConfig();
+      appendProperty("hadoop.tags.system", "YARN,HDFS,NAMENODE");
+      appendProperty("hadoop.tags.custom", "MYCUSTOMTAG");
+      appendPropertyByTag("dfs.cblock.trace.io", "false", "YARN");
+      appendPropertyByTag("dfs.replication", "1", "HDFS");
+      appendPropertyByTag("dfs.namenode.logging.level", "INFO", "NAMENODE");
+      appendPropertyByTag("dfs.random.key", "XYZ", "MYCUSTOMTAG");
+      endConfig();
+
+      Path fileResource = new Path(CONFIG);
+      conf.addResource(fileResource);
+      assertEq(conf.getAllPropertiesByTag("MYCUSTOMTAG")
+          .getProperty("dfs.random.key"), "XYZ");
+    } finally {
+      out.close();
+    }
+    try {
 
 Review comment:
   Better to use a try-with-resources construct that closes the Closeable vs. 
an explicit close. Syntactic sugar.
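
   A minimal sketch, with the writer also localized as suggested in the 
earlier comment (file contents abbreviated):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class TryWithResourcesSketch {
  static void writeConfig(String path) throws IOException {
    // The writer closes automatically when the block exits, even if a
    // write throws, so no explicit close() or tearDown() null-check remains.
    try (BufferedWriter out = new BufferedWriter(new FileWriter(path))) {
      out.write("<?xml version=\"1.0\"?>\n");
      out.write("<configuration>\n");
      out.write("</configuration>\n");
    }
  }
}
```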


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread Genmao Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1678#comment-1678
 ] 

Genmao Yu commented on HADOOP-15038:


[~wujinhu] Could you please give a summary of your work? It will be very 
helpful for review.

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module.
> Based on this work, other filesystems and object stores could implement 
> their own metastore to address known issues such as consistency problems 
> and metadata operation performance. [~ste...@apache.org] and others have 
> done a great deal of foundational work in {{S3Guard}}, which gives this 
> effort a very helpful starting point. I did some performance testing in 
> HADOOP-14098 and have started related work for Aliyun OSS. There is indeed 
> still work to do in {{S3Guard}}, such as the metadata cache becoming 
> inconsistent with S3, and the same problem will affect other object stores; 
> however, we can do this work in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestions are appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] vivekratnavel commented on issue #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
vivekratnavel commented on issue #527: HDDS-1093. Configuration tab in OM/SCM 
ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468101227
 
 
   @bharatviswa504 @avijayanhwx @arp7 Please review when you find time


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] vivekratnavel opened a new pull request #527: HDDS-1093. Configuration tab in OM/SCM ui is not displaying the correct values

2019-02-27 Thread GitBox
vivekratnavel opened a new pull request #527: HDDS-1093. Configuration tab in 
OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #526: HDDS-1183. Override getDelegationToken 
API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526#issuecomment-468100290
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 58 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1029 | trunk passed |
   | +1 | compile | 970 | trunk passed |
   | +1 | checkstyle | 240 | trunk passed |
   | +1 | mvnsite | 214 | trunk passed |
   | +1 | shadedclient | 1230 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 198 | trunk passed |
   | +1 | javadoc | 129 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 105 | the patch passed |
   | +1 | compile | 960 | the patch passed |
   | +1 | javac | 960 | the patch passed |
   | +1 | checkstyle | 204 | the patch passed |
   | +1 | mvnsite | 128 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 701 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 212 | the patch passed |
   | +1 | javadoc | 121 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 99 | common in the patch failed. |
   | +1 | unit | 48 | common in the patch passed. |
   | +1 | unit | 134 | ozonefs in the patch passed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 6719 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/526 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 3e407a210f51 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ea3cdc6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/testReport/ |
   | Max. process+thread count | 2742 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common hadoop-ozone/ozonefs 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-526/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #524: HDDS-1187. Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three.

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #524: HDDS-1187.  Healthy pipeline Chill Mode 
rule to consider only pipelines with replication factor three.
URL: https://github.com/apache/hadoop/pull/524#issuecomment-468098966
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1144 | trunk passed |
   | +1 | compile | 62 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 744 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 52 | trunk passed |
   | +1 | javadoc | 26 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 29 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 766 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 110 | server-scm in the patch passed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3286 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/524 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux b4aa3d666b4d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 04b228e |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/2/testReport/ |
   | Max. process+thread count | 418 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779970#comment-16779970
 ] 

Hadoop QA commented on HADOOP-15625:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 54s{color} | {color:orange} root: The patch generated 2 new + 27 unchanged - 
0 fixed = 29 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
14s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
48s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960478/HADOOP-15625-012.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 15fb0d66e5d2 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea3cdc6 |
| maven | versi

[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-27 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779968#comment-16779968
 ] 

Takanobu Asanuma commented on HADOOP-16126:
---

Thanks for your reply, [~szetszwo].

If there are no objections from others, I will backport HADOOP-16126 to 
branch-2 in a couple of days.

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.
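
For readers skimming the digest, here is a minimal, self-contained sketch of the
loop in question with the shorter interval the description suggests. The class
and field names are illustrative, not the real ipc.Client:

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative stand-in for the shutdown-wait loop; not the real ipc.Client. */
public class ShutdownWaitSketch {
  private final Set<Object> connections = ConcurrentHashMap.newKeySet();

  public void stop() {
    // wait until all connections are closed
    while (!connections.isEmpty()) {
      try {
        Thread.sleep(10); // was 100ms; a shorter poll cuts shutdown latency
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // preserve the interrupt status
        return;
      }
    }
  }
}
{code}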



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #525: HADOOP-16150. ChecksumFileSystem doesn't wrap concat()

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #525: HADOOP-16150. ChecksumFileSystem doesn't 
wrap concat()
URL: https://github.com/apache/hadoop/pull/525#issuecomment-468088437
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1148 | trunk passed |
   | +1 | compile | 950 | trunk passed |
   | +1 | checkstyle | 58 | trunk passed |
   | +1 | mvnsite | 79 | trunk passed |
   | +1 | shadedclient | 812 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 105 | trunk passed |
   | +1 | javadoc | 65 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 43 | the patch passed |
   | +1 | compile | 888 | the patch passed |
   | +1 | javac | 888 | the patch passed |
   | +1 | checkstyle | 58 | the patch passed |
   | +1 | mvnsite | 77 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 104 | the patch passed |
   | +1 | javadoc | 66 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 526 | hadoop-common in the patch passed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 5770 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-525/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/525 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 98d54feea8c0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ea3cdc6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-525/1/testReport/ |
   | Max. process+thread count | 1357 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-525/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] xiaoyuyao opened a new pull request #526: HDDS-1183. Override getDelegationToken API for OzoneFileSystem. Contr…

2019-02-27 Thread GitBox
xiaoyuyao opened a new pull request #526: HDDS-1183. Override 
getDelegationToken API for OzoneFileSystem. Contr…
URL: https://github.com/apache/hadoop/pull/526
 
 
   …ibuted by Xiaoyu Yao.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16126) ipc.Client.stop() may sleep too long to wait for all connections

2019-02-27 Thread Tsz Wo Nicholas Sze (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779887#comment-16779887
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-16126:
--

Thanks [~tasanuma0829].

For this (HADOOP-16126), we may backport it to any branch, since it is just a 
very small and safe change.

For HADOOP-16127, let's keep it in branch-3 for the moment to let it stabilize.

> ipc.Client.stop() may sleep too long to wait for all connections
> 
>
> Key: HADOOP-16126
> URL: https://issues.apache.org/jira/browse/HADOOP-16126
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: c16126_20190219.patch, c16126_20190220.patch, 
> c16126_20190221.patch
>
>
> {code}
> //Client.java
>   public void stop() {
> ...
> // wait until all connections are closed
> while (!connections.isEmpty()) {
>   try {
> Thread.sleep(100);
>   } catch (InterruptedException e) {
>   }
> }
> ...
>   }
> {code}
> In the code above, the sleep time is 100ms.  We found that simply changing 
> the sleep time to 10ms could improve a Hive job running time by 10x.
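
As an aside, a notification-based wait would remove the polling delay entirely.
A sketch under assumed names (illustrative only, not a proposed patch):

{code:java}
import java.util.HashSet;
import java.util.Set;

/** Illustrative notify-based alternative to the polling loop quoted above. */
public class NotifyingShutdownSketch {
  private final Set<Object> connections = new HashSet<>();

  public synchronized void connectionClosed(Object conn) {
    connections.remove(conn);
    if (connections.isEmpty()) {
      notifyAll(); // wake stop() as soon as the last connection is gone
    }
  }

  public synchronized void stop() throws InterruptedException {
    while (!connections.isEmpty()) {
      wait(); // no fixed sleep; resumes immediately after the last close
    }
  }
}
{code}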



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779901#comment-16779901
 ] 

Eric Yang commented on HADOOP-16150:


[~ste...@apache.org] The pull request link doesn't seem to include a .patch 
suffix. I am not sure the precommit build will pick it up.

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Follow-on from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call - so files created through concat *do 
> not have checksums*.
> If people are using a checksummed fs directly with the expectation that files 
> will have checksums, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try and implement the concat operation from raw local up at the checksum 
> level
> append() just gives up always; doing the same for concat would be the 
> simplest. Again, brings us back to "need a way to see if an FS supports a 
> feature before invocation", here checksum fs would reject append and concat
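
For the "fail always" option listed above, the override would amount to
something like the following hypothetical sketch (the actual change is in the
PR linked from this issue):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;

/** Hypothetical sketch of rejecting concat() at the checksum layer. */
public class ConcatRejectionSketch {
  public void concat(final Path trg, final Path[] psrcs) throws IOException {
    // refuse rather than silently produce a file with no checksum
    throw new UnsupportedOperationException(
        "Concat is not supported by ChecksumFileSystem: "
        + "the concatenated file would have no checksum");
  }
}
{code}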



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #524: HDDS-1187. Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three.

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #524: HDDS-1187.  Healthy pipeline Chill Mode 
rule to consider only pipelines with replication factor three.
URL: https://github.com/apache/hadoop/pull/524#issuecomment-468071931
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1096 | trunk passed |
   | +1 | compile | 60 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 774 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 39 | trunk passed |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 800 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 102 | server-scm in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3267 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/524 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 9354ad2acfcb 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / feccd28 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/testReport/ |
   | Max. process+thread count | 402 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-524/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779863#comment-16779863
 ] 

Ben Roling commented on HADOOP-15625:
-

Patch 012 fixes the failure in hadoop.conf.TestCommonConfigurationFields. The 
new config keys had the wrong prefix (fs.s3 instead of fs.s3a).

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, it caches the length from startup, and whenever a seek triggers 
> a new GET, you may get any of: old data, new data, or perhaps even a 
> transition from new data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
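
The three-step scheme above boils down to a small amount of state per stream.
A self-contained sketch, with illustrative names rather than the S3A code:

{code:java}
import java.io.IOException;

/** Illustrative etag change tracker; names are not from the S3A patch. */
public class EtagChangeTracker {
  private String expectedEtag; // cached from the first HEAD/GET response

  /** Verify the etag of each GET response against the cached value. */
  public void checkResponse(String responseEtag, String uri) throws IOException {
    if (expectedEtag == null) {
      expectedEtag = responseEtag; // first response: cache it
    } else if (!expectedEtag.equals(responseEtag)) {
      // the remote object changed mid-read: fail loudly instead of
      // silently mixing old and new data
      throw new IOException("Remote file changed during read: " + uri);
    }
  }
}
{code}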



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-012.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch, HADOOP-15625-012.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice that the file 
> has changed, it caches the length from startup, and whenever a seek triggers 
> a new GET, you may get any of: old data, new data, or perhaps even a 
> transition from new data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16150) checksumFS doesn't wrap concat(): concatenated files don't have checksums

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779862#comment-16779862
 ] 

Steve Loughran commented on HADOOP-16150:
-

Created a GitHub PR. I am exploring how well the new workflow works. First 
conclusion: we need to make sure the git commit text includes the author.

> checksumFS doesn't wrap concat(): concatenated files don't have checksums
> -
>
> Key: HADOOP-16150
> URL: https://issues.apache.org/jira/browse/HADOOP-16150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> Follow-on from HADOOP-16107. FilterFS passes through the concat operation, and 
> checksum FS doesn't override that call - so files created through concat *do 
> not have checksums*.
> If people are using a checksummed fs directly with the expectation that files 
> will have checksums, that expectation is not being met. 
> What to do?
> * fail always?
> * fail if checksums are enabled?
> * try and implement the concat operation from raw local up at the checksum 
> level
> append() just gives up always; doing the same for concat would be the 
> simplest. Again, brings us back to "need a way to see if an FS supports a 
> feature before invocation", here checksum fs would reject append and concat



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran opened a new pull request #525: HADOOP-16150. checksumFS doesn't wrap concat()

2019-02-27 Thread GitBox
steveloughran opened a new pull request #525: HADOOP-16150. checksumFS doesn't 
wrap concat()
URL: https://github.com/apache/hadoop/pull/525
 
 
   HADOOP-16150. checksumFS doesn't wrap concat(): concatenated files do…
   
   This intercepts concat() to throw an UnsupportedOperationException.
   
   It also disables the test TestLocalFSContractMultipartUploader, as the 
service-loader mechanism to create an MPU uploader needs to be replaced by an 
API call in the filesystems, as proposed by HDFS-13934
   
   Contributed by Steve Loughran.
   
   Change-Id: I85fc1fc9445ca0b7d325495d3bc55fe9f5e5ce52


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16102) FilterFileSystem does not implement getScheme

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779847#comment-16779847
 ] 

Steve Loughran commented on HADOOP-16102:
-

bq. It might have been useful to make getScheme an abstract method instead, but 
it's too late to change it at this point.

Yeah. Now, if you look at HADOOP-14132, I want to move us off loading FileSystem 
classes and have some simple class "with no classpath dependencies" to load. 
When someone gets round to doing that, they need to make sure things are 
abstract.

> FilterFileSystem does not implement getScheme
> -
>
> Key: HADOOP-16102
> URL: https://issues.apache.org/jira/browse/HADOOP-16102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Todd Owen
>Priority: Minor
>
> Calling {{getScheme}} on a {{FilterFileSystem}} throws 
> {{UnsupportedOperationException}}, which is the default provided by the base 
> class. Instead, it should return the scheme of the underlying ("filtered") 
> filesystem.
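
For reference, the override being asked for would be a one-line delegation. A
sketch using a hypothetical subclass (the issue was ultimately resolved as
Won't Fix):

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;

/** Hypothetical subclass showing the delegation the reporter asked for. */
public class SchemeDelegatingFilterFileSystem extends FilterFileSystem {
  public SchemeDelegatingFilterFileSystem(FileSystem fs) {
    super(fs);
  }

  @Override
  public String getScheme() {
    return fs.getScheme(); // scheme of the underlying ("filtered") filesystem
  }
}
{code}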



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16102) FilterFileSystem does not implement getScheme

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16102.
-
Resolution: Won't Fix

> FilterFileSystem does not implement getScheme
> -
>
> Key: HADOOP-16102
> URL: https://issues.apache.org/jira/browse/HADOOP-16102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Todd Owen
>Priority: Minor
>
> Calling {{getScheme}} on a {{FilterFileSystem}} throws 
> {{UnsupportedOperationException}}, which is the default provided by the base 
> class. Instead, it should return the scheme of the underlying ("filtered") 
> filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14132) Filesystem discovery to stop loading implementation classes

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779850#comment-16779850
 ] 

Steve Loughran commented on HADOOP-14132:
-

HADOOP-16102 shows a different problem with the current mechanism: filter 
filesystem subclasses.

Any new load/enumeration mechanism needs to avoid problems like that.

> Filesystem discovery to stop loading implementation classes
> ---
>
> Key: HADOOP-14132
> URL: https://issues.apache.org/jira/browse/HADOOP-14132
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Integration testing of Hadoop with HADOOP-14040 has shown that the move to a 
> shaded AWS JAR is slowing all Hadoop client code down.
> I believe this is due to how we use service discovery to identify FS 
> implementations: the implementation classes themselves are instantiated.
> This has known problems today with classloading, but clearly impacts 
> performance too, especially with complex transitive dependencies unique to 
> the loaded class.
> Proposed: have lightweight service declaration classes which implement an 
> interface declaring
> # schema
> # classname of FileSystem impl
> # classname of AbstractFS impl
> # homepage (for third party code, support, etc)
> These are what we register and scan in the FS to look for services.
> This will leave the question about what to do for existing filesystems? I 
> think we'll need to retain the old code for external ones, while moving the 
> hadoop modules to the new ones
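
A sketch of what such a lightweight declaration could look like, with an
assumed interface name and accessors matching the four items listed above
(illustrative only, not a committed API):

{code:java}
/** Assumed shape of a dependency-free service declaration; illustrative only. */
public interface FsServiceDeclaration {
  String getScheme();                // e.g. "s3a"
  String getFileSystemClassName();   // FileSystem impl, loaded only on demand
  String getAbstractFsClassName();   // AbstractFileSystem impl
  String getHomepage();              // support/homepage, for third-party code
}
{code}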



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779838#comment-16779838
 ] 

Hadoop QA commented on HADOOP-15625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 46s{color} | {color:orange} root: The patch generated 2 new + 27 unchanged - 
0 fixed = 29 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 49s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
45s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960444/HADOOP-15625-011.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 4badb0025d42 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommi

[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779836#comment-16779836
 ] 

Steve Loughran commented on HADOOP-16107:
-

Thanks for your help here.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has shown that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so they 
> are not generating CRCs:
> * createFile() builder
> * the following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.
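
The "relay to local create calls" fix amounts to overriding the pass-through
methods so they call the checksum-generating create in the same class. A sketch
under assumed names (the committed change is in ChecksumFileSystem itself; see
the Hudson comment for the actual file list):

{code:java}
import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.fs.ChecksumFileSystem;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

/** Illustrative subclass; the real fix lives in ChecksumFileSystem itself. */
public abstract class ChecksummingCreateSketch extends ChecksumFileSystem {
  protected ChecksummingCreateSketch(FileSystem fs) {
    super(fs);
  }

  @Override
  public FSDataOutputStream create(Path f, FsPermission permission,
      EnumSet<CreateFlag> flags, int bufferSize, short replication,
      long blockSize, Progressable progress,
      Options.ChecksumOpt checksumOpt) throws IOException {
    // relay to the checksum-generating create in this class,
    // not to super/the inner filesystem
    return create(f, permission, flags.contains(CreateFlag.OVERWRITE),
        bufferSize, replication, blockSize, progress);
  }
}
{code}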



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 opened a new pull request #524: HDDS-1187. Healthy pipeline Chill Mode rule to consider only pipelines with replication factor three.

2019-02-27 Thread GitBox
bharatviswa504 opened a new pull request #524: HDDS-1187.  Healthy pipeline 
Chill Mode rule to consider only pipelines with replication factor three.
URL: https://github.com/apache/hadoop/pull/524
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16149) hadoop-mapreduce-client-app build not converging due to transient dependencies

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779829#comment-16779829
 ] 

Steve Loughran commented on HADOOP-16149:
-

The patch was +1'd and resolved on GitHub. It looks like the discussion there 
isn't automatically making it back here (yet).

> hadoop-mapreduce-client-app build not converging due to transient dependencies
> --
>
> Key: HADOOP-16149
> URL: https://issues.apache.org/jira/browse/HADOOP-16149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> A clean build of trunk is failing today. No obvious change locally, except I 
> did accidentally kick off a build which may have pulled in some -SNAPSHOT 
> artifacts from the ASF snapshot repo.
> {code}
> Dependency convergence error for org.hamcrest:hamcrest-core:1.1 paths to 
> dependency are:
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-com.github.stefanbirkner:system-rules:1.18.0
> +-junit:junit-dep:4.11.20120805.1225
>   +-org.hamcrest:hamcrest-core:1.1
> and
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-junit:junit:4.12
> +-org.hamcrest:hamcrest-core:1.3
> {code}
> This goes away if system-rules excludes junit and hamcrest
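
For concreteness, the exclusion could look roughly like this in the module pom.
The coordinates are taken from the convergence error above; the committed pom
may differ:

{code:xml}
<dependency>
  <groupId>com.github.stefanbirkner</groupId>
  <artifactId>system-rules</artifactId>
  <version>1.18.0</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>junit</groupId>
      <artifactId>junit-dep</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.hamcrest</groupId>
      <artifactId>hamcrest-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}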



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16149) hadoop-mapreduce-client-app build not converging due to transient dependencies

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16149.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> hadoop-mapreduce-client-app build not converging due to transient dependencies
> --
>
> Key: HADOOP-16149
> URL: https://issues.apache.org/jira/browse/HADOOP-16149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> A clean build of trunk is failing today. No obvious change locally, except I 
> did accidentally kick off a build which may have pulled in some -SNAPSHOT 
> artifacts from the ASF snapshot repo.
> {code}
> Dependency convergence error for org.hamcrest:hamcrest-core:1.1 paths to 
> dependency are:
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-com.github.stefanbirkner:system-rules:1.18.0
> +-junit:junit-dep:4.11.20120805.1225
>   +-org.hamcrest:hamcrest-core:1.1
> and
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-junit:junit:4.12
> +-org.hamcrest:hamcrest-core:1.3
> {code}
> This goes away if system-rules excludes junit and hamcrest



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] steveloughran commented on issue #520: HADOOP-16149 hadoop-mapreduce-client-app build not converging

2019-02-27 Thread GitBox
steveloughran commented on issue #520: HADOOP-16149 hadoop-mapreduce-client-app 
build not converging
URL: https://github.com/apache/hadoop/pull/520#issuecomment-468051102
 
 
   thanks @billierinaldi 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16018:

   Resolution: Fixed
Fix Version/s: 2.9.3
   Status: Resolved  (was: Patch Available)

+1, committed to branches 2.9 & 2; retested the modified test before each 
commit.

Closing as fixed. Thanks!

> DistCp won't reassemble chunks when blocks per chunk > 0
> 
>
> Key: HADOOP-16018
> URL: https://issues.apache.org/jira/browse/HADOOP-16018
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Fix For: 3.0.4, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HADOOP-16018-002.patch, HADOOP-16018-branch-2-002.patch, 
> HADOOP-16018-branch-2-002.patch, HADOOP-16018-branch-2-003.patch, 
> HADOOP-16018-branch-2-004.patch, HADOOP-16018-branch-2-004.patch, 
> HADOOP-16018-branch-2-005.patch, HADOOP-16018-branch-2-005.patch, 
> HADOOP-16018-branch-2-006.patch, HADOOP-16018.01.patch
>
>
> I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the 
> same file when blocks per chunk has been set > 0.
> In the CopyCommitter::commitJob, this logic can prevent chunks from 
> reassembling if blocks per chunk is equal to 0:
> {code:java}
> if (blocksPerChunk > 0) {
>   concatFileChunks(conf);
> }
> {code}
> Then in CopyCommitter's ctor, blocksPerChunk is initialised from the config:
> {code:java}
> blocksPerChunk = context.getConfiguration().getInt(
> DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
> {code}
>  
> But here the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() 
> will always return the empty string, because the switch is constructed without 
> a config label:
> {code:java}
> BLOCKS_PER_CHUNK("",
> new Option("blocksperchunk", true, "If set to a positive value, files"
> + "with more blocks than this value will be split into chunks of "
> + " blocks to be transferred in parallel, and "
> + "reassembled on the destination. By default,  is "
> + "0 and the files will be transmitted in their entirety without "
> + "splitting. This switch is only applicable when the source file "
> + "system implements getBlockLocations method and the target file "
> + "system implements concat method"))
> {code}
> As a result it will fall back to the default value 0 for blocksPerChunk, and 
> prevent the chunks from reassembling.
>  
>  
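
The fix direction follows directly from the analysis above: give the switch a
non-empty config label so the committer's getInt() lookup resolves. A
self-contained sketch; the enum shape and the label string are assumptions, not
the committed patch:

{code:java}
import org.apache.commons.cli.Option;

/** Illustrative enum showing a switch that carries a real config label. */
public enum ChunkSwitchSketch {
  BLOCKS_PER_CHUNK("distcp.blocks.per.chunk", // assumed label value
      new Option("blocksperchunk", true, "blocks per chunk; 0 = no splitting"));

  private final String confLabel;
  private final Option option;

  ChunkSwitchSketch(String confLabel, Option option) {
    this.confLabel = confLabel;
    this.option = option;
  }

  public String getConfigLabel() {
    return confLabel; // no longer the empty string, so getInt() can find it
  }
}
{code}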



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779813#comment-16779813
 ] 

Hadoop QA commented on HADOOP-15625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 45s{color} | {color:orange} root: The patch generated 2 new + 27 unchanged - 
0 fixed = 29 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960442/HADOOP-15625-010.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 00116eb2e206 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 

[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779773#comment-16779773
 ] 

Hadoop QA commented on HADOOP-15625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m  3s{color} | {color:orange} root: The patch generated 1 new + 27 unchanged - 
0 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
58s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960428/HADOOP-15625-009.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 882ad5558a70 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64

[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779771#comment-16779771
 ] 

Hudson commented on HADOOP-16107:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16084 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16084/])
HADOOP-16107.  Update ChecksumFileSystem createFile/openFile API to (eyang: rev 
feccd282febb5fe5d043480ba989b6f045409efa)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/TestFutureIO.java


> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so 
> that checksums are created and read. 
> MAPREDUCE-7184 has shown that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so they 
> are not generating CRCs:
> * createFile() builder
> * the following create calls:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 edited a comment on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-27 Thread GitBox
bharatviswa504 edited a comment on issue #502: HDDS-919. Enable prometheus 
endpoints for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468030244
 
 
   Thank you @elek for the update.
   
   One minor comment: we don't need the change in 
hadoop-hdds/container-service/pom.xml, as we already have those dependencies 
at lines 36-39. You can take care of this during commit.
   
   ```
   <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-hdds-server-framework</artifactId>
   </dependency>
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-27 Thread GitBox
bharatviswa504 commented on issue #502: HDDS-919. Enable prometheus endpoints 
for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-468030244
 
 
   Thank you @elek for the update.
   
   One minor comment: we don't need the change in 
hadoop-hdds/container-service/pom.xml, as we already have those dependencies 
at lines 36-39. You can take care of this during commit.
   
 <dependency>
   <groupId>org.apache.hadoop</groupId>
   <artifactId>hadoop-hdds-server-framework</artifactId>
 </dependency>


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779761#comment-16779761
 ] 

Eric Yang edited comment on HADOOP-16107 at 2/27/19 9:01 PM:
-

+1 Thank you [~ste...@apache.org] for the patch.  Patch 3 committed to trunk 
with white space fixed.
Thank you [~iwasakims] for the review.


was (Author: eyang):
+1 Thank you [~ste...@apache.org] for the patch.  Patch 3 committed to trunk 
with white space fixed.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * createFile() builder
> The following create calls are affected:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods and relay to the local create calls, not to the 
> inner FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16149) hadoop-mapreduce-client-app build not converging due to transient dependencies

2019-02-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779764#comment-16779764
 ] 

Hudson commented on HADOOP-16149:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16083 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16083/])
HADOOP-16149 hadoop-mapreduce-client-app build not converging due to 
(billie.rinaldi: rev 5b43e42d0c6376a075d610ca30425b5db5968689)
* (edit) hadoop-project/pom.xml


> hadoop-mapreduce-client-app build not converging due to transient dependencies
> --
>
> Key: HADOOP-16149
> URL: https://issues.apache.org/jira/browse/HADOOP-16149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Clean build of trunk is failing today. No obvious change locally, except I did 
> accidentally kick off a build which may have pulled in some -SNAPSHOT 
> artifacts from the ASF snapshot repo.
> {code}
> Dependency convergence error for org.hamcrest:hamcrest-core:1.1 paths to 
> dependency are:
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-com.github.stefanbirkner:system-rules:1.18.0
> +-junit:junit-dep:4.11.20120805.1225
>   +-org.hamcrest:hamcrest-core:1.1
> and
> +-org.apache.hadoop:hadoop-mapreduce-client-app:3.3.0-SNAPSHOT
>   +-junit:junit:4.12
> +-org.hamcrest:hamcrest-core:1.3
> {code}
> This goes away if system-rules excludes junit and hamcrest
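
For reference, the described exclusion would look roughly like this in the 
module POM; the coordinates are taken from the convergence error above, and 
the test scope is an assumption:

{code:xml}
<dependency>
  <groupId>com.github.stefanbirkner</groupId>
  <artifactId>system-rules</artifactId>
  <scope>test</scope>
  <exclusions>
    <!-- junit-dep drags in hamcrest-core 1.1, clashing with junit 4.12's 1.3 -->
    <exclusion>
      <groupId>junit</groupId>
      <artifactId>junit-dep</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.hamcrest</groupId>
      <artifactId>hamcrest-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}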



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-27 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16107:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Thank you [~ste...@apache.org] for the patch.  Patch 3 committed to trunk with 
white space fixed.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * createFile() builder
> The following create calls are affected:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods and relay to the local create calls, not to the 
> inner FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-27 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779761#comment-16779761
 ] 

Eric Yang edited comment on HADOOP-16107 at 2/27/19 8:58 PM:
-

+1 Thank you [~ste...@apache.org] for the patch.  Patch 3 committed to trunk 
with white space fixed.


was (Author: eyang):
Thank you [~ste...@apache.org] for the patch.  Patch 3 committed to trunk with 
white space fixed.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFileSystem, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * createFile() builder
> The following create calls are affected:
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>       final FsPermission permission,
>       final EnumSet<CreateFlag> flags,
>       final int bufferSize,
>       final short replication,
>       final long blockSize,
>       final Progressable progress,
>       final Options.ChecksumOpt checksumOpt) throws IOException {
>     return super.create(f, permission, flags, bufferSize, replication,
>         blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods and relay to the local create calls, not to the 
> inner FS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-011.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
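
For reference, a minimal sketch of the proposed detection, independent of the 
attached patches; the class and method names are illustrative, not the patch's 
API:

{code:java}
import java.io.IOException;

/** Remember the etag of the first HEAD/GET and fail a later read whose
 *  GET response carries a different etag. */
class EtagChangeDetector {
  private String knownEtag; // null until the first response is seen

  /** Call with the ETag header of each response before consuming its body. */
  void check(String responseEtag, String key) throws IOException {
    if (knownEtag == null) {
      knownEtag = responseEtag;   // cache the etag from the first response
    } else if (!knownEtag.equals(responseEtag)) {
      // the remote object changed mid-read: fail loudly rather than
      // silently mixing old and new data
      throw new IOException("Object at " + key + " changed during read: etag "
          + knownEtag + " is now " + responseEtag);
    }
  }
}
{code}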



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779750#comment-16779750
 ] 

Ben Roling commented on HADOOP-15625:
-

The 011 patch addresses the "The patch has 13 line(s) that end in whitespace" problem.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch, 
> HADOOP-15625-011.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] billierinaldi merged pull request #520: HADOOP-16149 hadoop-mapreduce-client-app build not converging

2019-02-27 Thread GitBox
billierinaldi merged pull request #520: HADOOP-16149 
hadoop-mapreduce-client-app build not converging
URL: https://github.com/apache/hadoop/pull/520
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 closed pull request #519: HDDS-1180. TestRandomKeyGenerator fails with NPE

2019-02-27 Thread GitBox
bharatviswa504 closed pull request #519: HDDS-1180. TestRandomKeyGenerator 
fails with NPE
URL: https://github.com/apache/hadoop/pull/519
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #519: HDDS-1180. TestRandomKeyGenerator fails with NPE

2019-02-27 Thread GitBox
bharatviswa504 commented on issue #519: HDDS-1180. TestRandomKeyGenerator fails 
with NPE
URL: https://github.com/apache/hadoop/pull/519#issuecomment-468022011
 
 
   This has already been fixed and committed by HDDS-1174.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] bharatviswa504 commented on issue #519: HDDS-1180. TestRandomKeyGenerator fails with NPE

2019-02-27 Thread GitBox
bharatviswa504 commented on issue #519: HDDS-1180. TestRandomKeyGenerator fails 
with NPE
URL: https://github.com/apache/hadoop/pull/519#issuecomment-468018264
 
 
   +1 LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779741#comment-16779741
 ] 

Ben Roling commented on HADOOP-15625:
-

bq. sorry, I must have meant core-default.xml in hadoop common.

Yep, [~noslowerdna] helped me realize that.  I addressed it in the 008 patch.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779740#comment-16779740
 ] 

Ben Roling commented on HADOOP-15625:
-

The 010 patch addresses the noisy warning logging.

Steve - have another look and let me know if there is anything else you think I 
should do.

I do see my latest patches are failing with "The patch has 13 line(s) that end 
in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply". I need to figure out what that is about 
and correct it.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-010.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch, HADOOP-15625-010.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779735#comment-16779735
 ] 

Steve Loughran commented on HADOOP-15625:
-

thanks, I'll try and have a look tomorrow.

bq. I also didn't address core-site.xml. To be clear there, you're talking 
about the src/test/resources/core-site.xml, right?

sorry, I must have meant core-default.xml in hadoop-common. We keep the default 
values declared in there (too), so that any automated tooling can generate HTML 
or other docs from it.
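
For illustration, a core-default.xml entry has this shape; the property name 
and value below are assumptions based on the change-detection options discussed 
here, not necessarily what the patch declares:

{code:xml}
<property>
  <name>fs.s3a.change.detection.source</name>
  <value>etag</value>
  <description>
    Attribute used to detect that an open file has changed: here, the etag
    returned on HEAD/GET responses.
  </description>
</property>
{code}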

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15862) ABFS to support a Delegation Token provider which marshalls current Oauth secrets

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15862:

Summary: ABFS to support a Delegation Token provider which marshalls 
current Oauth secrets  (was: ABFS to support a Delegation Token provider which 
marshalls current login secrets)

> ABFS to support a Delegation Token provider which marshalls current Oauth 
> secrets
> -
>
> Key: HADOOP-15862
> URL: https://issues.apache.org/jira/browse/HADOOP-15862
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> ABFS has an extension point for generating delegation tokens; presumably the 
> implementation is actually using Kerberos to generate some secrets to pass 
> around.
> HADOOP-14556 shows how an object store can actually implement DTs which 
> marshall full credentials over the wire to remote services, allowing users 
> to submit queries to shared clusters. This isn't as secure as Kerberos, but 
> it does let users access their private data.
> (This JIRA avoids worrying about session & role auth, and just takes the 
> config options for login and marshalls them as a DT.)
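
For reference, the general shape of such a token identifier, sketched with the 
plain Writable interface; the class and field names are assumptions, not the 
ABFS extension-point API:

{code:java}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

/** Illustrative token identifier carrying the client's OAuth login config so
 *  a remote worker can recreate the login; not the ABFS implementation. */
class OauthSecretsTokenIdentifier implements Writable {
  private final Text clientId = new Text();
  private final Text clientSecret = new Text();
  private final Text endpoint = new Text();

  @Override
  public void write(DataOutput out) throws IOException {
    // submitter side: the secrets travel inside the delegation token
    clientId.write(out);
    clientSecret.write(out);
    endpoint.write(out);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // worker side: unmarshall the secrets and rebuild the OAuth login
    clientId.readFields(in);
    clientSecret.readFields(in);
    endpoint.readFields(in);
  }
}
{code}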



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16103) Failure of ABFS test ITestAbfsIdentityTransformer

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16103.
-
Resolution: Cannot Reproduce

> Failure of ABFS test ITestAbfsIdentityTransformer
> -
>
> Key: HADOOP-16103
> URL: https://issues.apache.org/jira/browse/HADOOP-16103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> The test {{ITestAbfsIdentityTransformer}} of HADOOP-15954  is failing, "There 
> is no primary group for UGI alice/localh...@example.com (auth:KERBEROS)"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16136) ABFS: Should only transform username to short name

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779692#comment-16779692
 ] 

Steve Loughran commented on HADOOP-16136:
-

+1 and in trunk. Shall I backport to branch-3.2 too?

> ABFS: Should only transform username to short name
> --
>
> Key: HADOOP-16136
> URL: https://issues.apache.org/jira/browse/HADOOP-16136
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16136-001.patch
>
>
> When short name is enabled, IdentityTransformer should only transform the user 
> name to a short name, and the group name should remain unchanged.
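
For reference, a sketch of the intended behavior; the helper below is 
illustrative, not IdentityTransformer's API:

{code:java}
/** Transform only the user principal to its short name; group names are
 *  returned unchanged. Illustrative helper, not the committed code. */
final class ShortNameSketch {
  private ShortNameSketch() {
  }

  /** "alice@EXAMPLE.COM" becomes "alice"; a name without a realm is unchanged. */
  static String toShortUserName(String principal) {
    final int at = principal.indexOf('@');
    return at < 0 ? principal : principal.substring(0, at);
  }

  /** Group names are not shortened. */
  static String transformGroup(String group) {
    return group;
  }
}
{code}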



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15582) Document ABFS

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779690#comment-16779690
 ] 

Steve Loughran commented on HADOOP-15582:
-

HADOOP-16068 contains the first attempt of this

> Document ABFS
> -
>
> Key: HADOOP-15582
> URL: https://issues.apache.org/jira/browse/HADOOP-15582
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Thomas Marquardt
>Priority: Major
>
> Add documentation for abfs under 
> {{hadoop-tools/hadoop-azure/src/site/markdown}}
> Possible topics include
> * intro to scheme
> * why abfs (link to MSDN, etc)
> * config options
> * switching from wasb/interop
> * troubleshooting
> testing.md should add a section on testing this stuff too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779674#comment-16779674
 ] 

Ben Roling commented on HADOOP-15625:
-

The newly uploaded 009 patch fixes line length, javadoc, and checkstyle issues.

Still remaining is the potentially noisy warning condition.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779670#comment-16779670
 ] 

Hadoop QA commented on HADOOP-15625:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 26s{color} | {color:orange} root: The patch generated 39 new + 27 unchanged 
- 0 fixed = 66 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960408/HADOOP-15625-008.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 358089bffbab 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_

[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-009.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch, HADOOP-15625-009.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16109:

Description: 
When using S3AFileSystem to read Parquet files, a specific set of circumstances 
causes an EOFException that is not thrown when reading the same file from 
local disk.

Note this has only been observed under specific circumstances:
 - when the reader is doing a projection (which causes it to do a seek 
backwards and put the filesystem into random mode)
 - when the file is larger than the readahead buffer size
 - when the seek behavior of the Parquet reader causes it to seek towards the 
end of the current input stream without reopening it, such that the next read 
on that stream will read past its end.

Exception from Parquet reader is as follows:
{code}
Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
to read
 at 
org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
 at 
org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
 at 
org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
 at 
org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
 at 
org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
 at 
org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
 at 
org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
 at 
org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
 at 
org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
 at 
org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199)
 at 
org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190)
 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
 at java.lang.Thread.run(Thread.java:748)
{code}
The following example program generates the same root behavior (sans finding a 
Parquet file that happens to trigger this condition) by purposely reading past 
the already-active readahead range on any file >= 1029 bytes in size.


{code:java}
final Configuration conf = new Configuration();
conf.set("fs.s3a.readahead.range", "1K");
conf.set("fs.s3a.experimental.input.fadvise", "random");

final FileSystem fs = FileSystem.get(path.toUri(), conf);
// forward seek reading across readahead boundary
try (FSDataInputStream in = fs.open(path)) {
  final byte[] temp = new byte[5];
  in.readByte();
  in.readFully(1023, temp); // <-- works
}
// forward seek reading from end of readahead boundary
try (FSDataInputStream in = fs.open(path)) {
  final byte[] temp = new byte[5];
  in.readByte();
  in.readFully(1024, temp); // <-- throws EOFException
}
{code}
 

  was:
When using S3AFileSystem to read Parquet files a specific set of circumstances 
causes an  EOFException that is not thrown when reading the same file from 
local disk

Note this has only been observed under specific circumstances:
  - when the reader is doing a projection (will cause it to do a seek backwards 
and put the filesystem into random mode)
 - when the file is larger than the readahead buffer size
 - when the seek behavior of the Parquet reader causes the reader to seek 
towards the end of the current input stream without reopening, such that the 
next read on the currently open stream will read past the end of the currently 
open stream.

Exception from Parquet reader is as follows:

Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left 
to read
 at 
org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104)
 at 
org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127)
 at 
org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91)
 at 
org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174)
 at 
org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805)
 at 
org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127)
 at 
org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
 at 
org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207)
 at 
org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206)
 at 
org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputForm

[jira] [Updated] (HADOOP-16136) ABFS: Should only transform username to short name

2019-02-27 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16136:

   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> ABFS: Should only transform username to short name
> --
>
> Key: HADOOP-16136
> URL: https://issues.apache.org/jira/browse/HADOOP-16136
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16136-001.patch
>
>
> When short name is enabled, IdentityTransformer should only transform the user 
> name to a short name, and the group name should remain unchanged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16058) S3A tests to include Terasort

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779545#comment-16779545
 ] 

Steve Loughran commented on HADOOP-16058:
-

TestJobCounters is a known failure addressed by HADOOP-16107

> S3A tests to include Terasort
> -
>
> Key: HADOOP-16058
> URL: https://issues.apache.org/jira/browse/HADOOP-16058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16058-001.patch, HADOOP-16058-002.patch, 
> HADOOP-16058-002.patch, HADOOP-16058-002.patch
>
>
> Add S3A tests to run terasort for the magic and directory committers.
> MAPREDUCE-7091 is a requirement for this
> Bonus feature: print the results to see which committers are faster in the 
> specific test setup. As that's a function of latency to the store, bandwidth 
> and size of jobs, it's not at all meaningful, just interesting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779540#comment-16779540
 ] 

Hadoop QA commented on HADOOP-16147:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m  
8s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16147 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960394/HADOOP-16147-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f3a4db7edc96 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c8c422 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15988/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15988/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Allow CopyListing seque

[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779534#comment-16779534
 ] 

Ben Roling commented on HADOOP-15625:
-

I've uploaded a new patch:
* added configs to core-default.xml
* introduced NoVersionAttributeException for version required condition
* added documentation to index.md
* improved documentation in troubleshooting_s3a.md

Next on my list is:
* clean up line length and javadoc style issues
* revisit potentially noisy warnings

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or even perhaps go from new data to old 
> data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2019-02-27 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-15625:

Attachment: HADOOP-15625-008.patch

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP--15625-006.patch, HADOOP-15625-001.patch, 
> HADOOP-15625-002.patch, HADOOP-15625-003.patch, HADOOP-15625-004.patch, 
> HADOOP-15625-005.patch, HADOOP-15625-006.patch, HADOOP-15625-007.patch, 
> HADOOP-15625-008.patch
>
>
> S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, caches the length from startup, and whenever a seek triggers a new 
> GET, you may get old data, new data, or perhaps even go from new data back 
> to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verify the etag of the response
> # raise an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779524#comment-16779524
 ] 

Hadoop QA commented on HADOOP-16147:


| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 16m 13s | trunk passed |
| +1 | compile | 0m 27s | trunk passed |
| +1 | checkstyle | 0m 21s | trunk passed |
| +1 | mvnsite | 0m 31s | trunk passed |
| +1 | shadedclient | 11m 57s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 34s | trunk passed |
| +1 | javadoc | 0m 16s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 25s | the patch passed |
| +1 | compile | 0m 22s | the patch passed |
| +1 | javac | 0m 22s | the patch passed |
| +1 | checkstyle | 0m 13s | the patch passed |
| +1 | mvnsite | 0m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 54s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 42s | the patch passed |
| +1 | javadoc | 0m 19s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 12m 22s | hadoop-distcp in the patch passed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 59m 14s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-517/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/517 |
| JIRA Issue | HADOOP-16147 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 32c6e768a325 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 6c8c422 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-517/2/testReport/ |
| Max. process+thread count | 445 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-517/2/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> Allow CopyListing sequence file keys and values to be more easily customized

[GitHub] hadoop-yetus commented on issue #517: HADOOP-16147: Allow CopyListing sequence file keys and values to be m…

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #517: HADOOP-16147: Allow CopyListing sequence 
file keys and values to be m…
URL: https://github.com/apache/hadoop/pull/517#issuecomment-467948117
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 973 | trunk passed |
   | +1 | compile | 27 | trunk passed |
   | +1 | checkstyle | 21 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 717 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 34 | trunk passed |
   | +1 | javadoc | 16 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 25 | the patch passed |
   | +1 | compile | 22 | the patch passed |
   | +1 | javac | 22 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 24 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 714 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 42 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 742 | hadoop-distcp in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3554 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/517 |
   | JIRA Issue | HADOOP-16147 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 32c6e768a325 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6c8c422 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/2/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-517/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #490: HDDS-1113. Remove default dependencies from hadoop-ozone project

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #490: HDDS-1113. Remove default dependencies 
from hadoop-ozone project
URL: https://github.com/apache/hadoop/pull/490#issuecomment-467946883
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 977 | trunk passed |
   | -1 | compile | 93 | hadoop-ozone in trunk failed. |
   | -1 | mvnsite | 98 | hadoop-ozone in trunk failed. |
   | +1 | shadedclient | 2133 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 221 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 501 | the patch passed |
   | -1 | compile | 97 | hadoop-ozone in the patch failed. |
   | -1 | javac | 97 | hadoop-ozone in the patch failed. |
   | -1 | mvnsite | 91 | hadoop-ozone in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 11 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 760 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 203 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 608 | hadoop-ozone in the patch failed. |
   | +1 | unit | 38 | common in the patch passed. |
   | +1 | unit | 28 | client in the patch passed. |
   | +1 | unit | 37 | ozone-manager in the patch passed. |
   | +1 | unit | 28 | objectstore-service in the patch passed. |
   | +1 | unit | 37 | s3gateway in the patch passed. |
   | -1 | unit | 539 | integration-test in the patch failed. |
   | -1 | unit | 74 | tools in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 5788 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.freon.TestFreonWithDatanodeFastRestart |
   |   | hadoop.ozone.freon.TestRandomKeyGenerator |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.freon.TestFreonWithDatanodeFastRestart |
   |   | hadoop.ozone.freon.TestRandomKeyGenerator |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/490 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  xml  |
   | uname | Linux 5b565315a3f0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6c8c422 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/branch-mvnsite-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/patch-mvnsite-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/artifact/out/patch-unit-hadoop-ozone_tools.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/testReport/ |
   | Max. process+thread count | 3725 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/s3gateway hadoop-ozone/integration-test hadoop-ozone/tools U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-490/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

-

[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-27 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779476#comment-16779476
 ] 

Andrew Olson commented on HADOOP-16147:
---

Attached patch update to address the checkstyle issues.

> Allow CopyListing sequence file keys and values to be more easily customized
> 
>
> Key: HADOOP-16147
> URL: https://issues.apache.org/jira/browse/HADOOP-16147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Major
> Attachments: HADOOP-16147-001.patch, HADOOP-16147-002.patch
>
>
> We have encountered a scenario where, when using the Crunch library to run a 
> distributed copy (CRUNCH-660, CRUNCH-675) at the conclusion of a job we need 
> to dynamically rename target paths to the preferred destination output part 
> file names, rather than retaining the original source path names.
> A custom CopyListing implementation appears to be the proper solution for 
> this. However the place where the current SimpleCopyListing logic needs to be 
> adjusted is in a private method (writeToFileListing), so a relatively large 
> portion of the class would need to be cloned.
> To minimize the amount of code duplication required for such a custom 
> implementation, we propose adding two new protected methods to the 
> CopyListing class, that can be used to change the actual keys and/or values 
> written to the copy listing sequence file: 
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus);
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus);
> {noformat}
> The SimpleCopyListing class would then be modified to consume these methods 
> as follows:
> {noformat}
> fileListWriter.append(
>getFileListingKey(sourcePathRoot, fileStatus),
>getFileListingValue(fileStatus));
> {noformat}
> The default implementations would simply preserve the present behavior of the 
> SimpleCopyListing class, and could reside in either CopyListing or 
> SimpleCopyListing, whichever is preferable.
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus) {
>return new Text(DistCpUtils.getRelativePath(sourcePathRoot, 
> fileStatus.getPath()));
> }
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus) {
>return fileStatus;
> }
> {noformat}
> Please let me know if this proposal seems to be on the right track. If so I 
> can provide a patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-02-27 Thread Andrew Olson (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Olson updated HADOOP-16147:
--
Attachment: HADOOP-16147-002.patch

> Allow CopyListing sequence file keys and values to be more easily customized
> 
>
> Key: HADOOP-16147
> URL: https://issues.apache.org/jira/browse/HADOOP-16147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Major
> Attachments: HADOOP-16147-001.patch, HADOOP-16147-002.patch
>
>
> We have encountered a scenario where, when using the Crunch library to run a 
> distributed copy (CRUNCH-660, CRUNCH-675) at the conclusion of a job we need 
> to dynamically rename target paths to the preferred destination output part 
> file names, rather than retaining the original source path names.
> A custom CopyListing implementation appears to be the proper solution for 
> this. However the place where the current SimpleCopyListing logic needs to be 
> adjusted is in a private method (writeToFileListing), so a relatively large 
> portion of the class would need to be cloned.
> To minimize the amount of code duplication required for such a custom 
> implementation, we propose adding two new protected methods to the 
> CopyListing class, that can be used to change the actual keys and/or values 
> written to the copy listing sequence file: 
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus);
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus);
> {noformat}
> The SimpleCopyListing class would then be modified to consume these methods 
> as follows:
> {noformat}
> fileListWriter.append(
>getFileListingKey(sourcePathRoot, fileStatus),
>getFileListingValue(fileStatus));
> {noformat}
> The default implementations would simply preserve the present behavior of the 
> SimpleCopyListing class, and could reside in either CopyListing or 
> SimpleCopyListing, whichever is preferable.
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus) {
>return new Text(DistCpUtils.getRelativePath(sourcePathRoot, 
> fileStatus.getPath()));
> }
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus) {
>return fileStatus;
> }
> {noformat}
> Please let me know if this proposal seems to be on the right track. If so I 
> can provide a patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] noslowerdna commented on issue #517: HADOOP-16147: Allow CopyListing sequence file keys and values to be m…

2019-02-27 Thread GitBox
noslowerdna commented on issue #517: HADOOP-16147: Allow CopyListing sequence 
file keys and values to be m…
URL: https://github.com/apache/hadoop/pull/517#issuecomment-467920480
 
 
   Responding to @hadoop-yetus,
   
   > Please justify why no new tests are needed for this patch. 
   
   Only minor refactoring of the current code was done, allowing for the more 
specific behavior to be overridden. No functionality was modified.
   
   > Also please list what manual steps were performed to verify this patch.
   
   In a separate project, we created a custom `CopyListing` implementation with 
the `getFileListingKey` method overridden to return a different key, and then 
successfully ran a distributed copy producing the desired alternative target 
paths by setting `distcp.copy.listing.class` to the name of that custom class.
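   
   For illustration, a minimal sketch of such a subclass, assuming the 
`getFileListingKey` hook from this PR; the class name and the part-file 
naming scheme are hypothetical:
   
   ```java
   import java.util.concurrent.atomic.AtomicLong;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.io.Text;
   import org.apache.hadoop.security.Credentials;
   import org.apache.hadoop.tools.CopyListingFileStatus;
   import org.apache.hadoop.tools.SimpleCopyListing;
   
   // Hypothetical subclass: writes part-NNNNN keys into the copy listing
   // instead of the source-relative paths, so targets get renamed.
   public class PartFileCopyListing extends SimpleCopyListing {
   
     private final AtomicLong counter = new AtomicLong();
   
     // DistCp constructs CopyListing implementations with
     // (Configuration, Credentials), so expose that constructor.
     public PartFileCopyListing(Configuration conf, Credentials credentials) {
       super(conf, credentials);
     }
   
     @Override
     protected Text getFileListingKey(Path sourcePathRoot,
         CopyListingFileStatus fileStatus) {
       return new Text(String.format("/part-%05d", counter.getAndIncrement()));
     }
   }
   ```
   
   Setting `distcp.copy.listing.class` to this class's fully qualified name 
would then produce the alternative target paths described above.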


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #523: HDDS-623. On SCM UI, Node Manager info is 
empty
URL: https://github.com/apache/hadoop/pull/523#issuecomment-467906828
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1027 | trunk passed |
   | +1 | compile | 46 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 1769 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 746 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 23 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 102 | server-scm in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 2940 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/523 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  |
   | uname | Linux 81a86b7df65c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6c8c422 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-523/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread wujinhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779423#comment-16779423
 ] 

wujinhu commented on HADOOP-15038:
--

Uploaded 001.patch for discussion.

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.2.0, 2.9.2, 3.0.3, 3.1.2
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module. 
> Based on this work, other filesystems or object stores could implement their 
> own metastores for optimization (for known issues such as consistency 
> problems and metadata operation performance). [~ste...@apache.org] and others 
> have done a great deal of foundational work in {{S3Guard}}, which makes this 
> much easier to start. I did some perf testing in HADOOP-14098 and started 
> related work for Aliyun OSS. Indeed there is still work to do for 
> {{S3Guard}}, such as the metadata cache becoming inconsistent with S3. That 
> will also be a problem for other object stores; however, we can do this work 
> in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestion is appreciated.
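
To make the proposal concrete, one possible shape for such a store-agnostic contract (a sketch with hypothetical names; the real {{MetadataStore}} API in S3Guard is richer than this):

{code:java}
import java.io.Closeable;
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// A common-module metastore interface that any object store connector
// (S3A, Aliyun OSS, ...) could implement for its own metadata cache.
public interface CommonMetadataStore extends Closeable {
  FileStatus get(Path path) throws IOException;    // null if not tracked
  void put(FileStatus status) throws IOException;  // record create/update
  void delete(Path path) throws IOException;       // record a delete
}
{code}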



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16107) FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC logic

2019-02-27 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779422#comment-16779422
 ] 

Masatake Iwasaki commented on HADOOP-16107:
---

The attached 003 fixed the failure of TestJobCounters on my local machine too. LGTM.

For reviewers' convenience,
{code:java}
[ERROR] Failures: 
[ERROR]   
TestLocalFileSystem.testCRCwithCreateChecksumOpt:887->assertWritesCRC:827 Bytes 
written in create with checksum opt; stats=169 bytes read, 193 bytes written, 0 
read ops, 0 large read ops, 0 write ops expected:<22> but was:<10>
[ERROR]   
TestLocalFileSystem.testCRCwithCreateNonRecursiveCreateFlags:930->assertWritesCRC:827
 Bytes written in create with checksum opt; stats=181 bytes read, 220 bytes 
written, 0 read ops, 0 large read ops, 0 write ops expected:<22> but was:<10>
[ERROR]   
TestLocalFileSystem.testReadIncludesCRCwithBuilders:958->assertWritesCRC:827 
Bytes written in createFile(); stats=3245672 bytes read, 192167 bytes written, 
0 read ops, 0 large read ops, 0 write ops expected:<22> but was:<10>
[ERROR]   
TestLocalFileSystem.testWriteWithBuildersRecursive:995->assertWritesCRC:827 
Bytes written in createFile(); stats=64 bytes read, 105 bytes written, 0 read 
ops, 0 large read ops, 0 write ops expected:<22> but was:<10>
{code}
When I applied only the TestLocalFileSystem part of the patch, 4 tests failed. 
TestLocalFileSystem#testCRCwithClassicAPIs and 
TestLocalFileSystem#testCRCwithCreate7 succeeded without the fix. At first 
glance I thought these non-builder APIs were relevant to MAPREDUCE-7184, but 
they are not.

> FilterFileSystem doesn't wrap all create() or new builder calls; may skip CRC 
> logic
> ---
>
> Key: HADOOP-16107
> URL: https://issues.apache.org/jira/browse/HADOOP-16107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.3, 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-16107-001.patch, HADOOP-16107-003.patch
>
>
> LocalFS is a subclass of FilterFS, but overrides create and open so that 
> checksums are created and read. 
> MAPREDUCE-7184 has thrown up that the new builder openFile() call is being 
> forwarded to the inner FS without CRC checking. Reviewing/fixing that has 
> shown that some of the create methods aren't being correctly wrapped, so are 
> not generating CRCs:
> * createFile() builder
> The following create calls
> {code}
>   public FSDataOutputStream createNonRecursive(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress) throws IOException;
>   public FSDataOutputStream create(final Path f,
>   final FsPermission permission,
>   final EnumSet<CreateFlag> flags,
>   final int bufferSize,
>   final short replication,
>   final long blockSize,
>   final Progressable progress,
>   final Options.ChecksumOpt checksumOpt) throws IOException {
> return super.create(f, permission, flags, bufferSize, replication,
> blockSize, progress, checksumOpt);
>   }
> {code}
> This means that applications using these methods, directly or indirectly, to 
> create files aren't actually generating checksums.
> Fix: implement these methods & relay to local create calls, not to the inner 
> FS.
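
For illustration, a minimal sketch of that wrapping pattern (a hypothetical class, not the committed fix):

{code:java}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

// Hypothetical subclass showing the fix pattern: intercept the create
// overload locally instead of letting FilterFileSystem forward it to the
// inner filesystem, so the CRC-generating path is always taken.
public class CrcWrappingFileSystem extends FilterFileSystem {

  @Override
  public FSDataOutputStream create(Path f, FsPermission permission,
      EnumSet<CreateFlag> flags, int bufferSize, short replication,
      long blockSize, Progressable progress,
      Options.ChecksumOpt checksumOpt) throws IOException {
    // Relay to this class's own create(), which would apply the checksum
    // logic, rather than super.create(), which delegates to the inner FS.
    return create(f, permission, flags.contains(CreateFlag.OVERWRITE),
        bufferSize, replication, blockSize, progress);
  }
}
{code}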



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-02-27 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779400#comment-16779400
 ] 

Gabor Bota commented on HADOOP-15999:
-

Downloaded the patch and looking into the tombstone problem.

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and using whichever one has the higher 
> modified time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
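
A tiny illustration of that idea (hypothetical names, not the patch under review):

{code:java}
import org.apache.hadoop.fs.FileStatus;

// Given a status from S3 and one from the MetadataStore, trust whichever
// carries the higher modification time; either may be null (not found).
final class NewerEntryPolicy {
  static FileStatus choose(FileStatus fromS3, FileStatus fromStore) {
    if (fromS3 == null) {
      return fromStore;
    }
    if (fromStore == null) {
      return fromS3;
    }
    return fromS3.getModificationTime() >= fromStore.getModificationTime()
        ? fromS3 : fromStore;
  }
}
{code}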



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-02-27 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779364#comment-16779364
 ] 

Steve Loughran commented on HADOOP-16068:
-

checkstyle
{code}
./hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/extensions/TestCustomOauthTokenProvider.java:32:import
 static org.apache.hadoop.fs.azurebfs.extensions.WrappingTokenProvider.*;: 
Using the '.*' form of import should be avoided - 
org.apache.hadoop.fs.azurebfs.extensions.WrappingTokenProvider.*. 
[AvoidStarImport]
{code}

I'm not worried about that for static imports, but I'll switch it just to keep 
the complaints down. Other than that, this patch is ready to go in unless 
someone has fundamental issues with the proposed interface... and if they do, we 
can change that afterwards. The key thing is that this improves the binding of 
auth and DT issuing by allowing the plugins to use the store URI to bind to a 
specific filesystem instance.

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch, 
> HADOOP-16068-003.patch, HADOOP-16068-004.patch, HADOOP-16068-005.patch, 
> HADOOP-16068-006.patch, HADOOP-16068-007.patch, HADOOP-16068-008.patch, 
> HADOOP-16068-009.patch, HADOOP-16068-010.patch, HADOOP-16068-011.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.
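
A rough sketch of what that binding hook could look like (an assumption for illustration; the actual interface name and signature in the patch may differ):

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;

// A plugin implementing this would receive the URI and configuration of the
// owning filesystem instance, enabling per-FS auth and DT issuing.
public interface BoundToFileSystem {
  void bind(URI fsUri, Configuration conf) throws IOException;
}
{code}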



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek opened a new pull request #523: HDDS-623. On SCM UI, Node Manager info is empty

2019-02-27 Thread GitBox
elek opened a new pull request #523: HDDS-623. On SCM UI, Node Manager info is 
empty
URL: https://github.com/apache/hadoop/pull/523
 
 
   Fields like the ones below are empty:
   
   Node Manager: Minimum chill mode nodes 
   Node Manager: Out-of-node chill mode 
   Node Manager: Chill mode status 
   Node Manager: Manual chill mode
   
   Please see the attached screenshot (Screen Shot 2018-10-10 at 4.19.59 PM.png).
   
   See: https://issues.apache.org/jira/browse/HDDS-623


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop-yetus commented on issue #521: YARN-9065

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #521: YARN-9065
URL: https://github.com/apache/hadoop/pull/521#issuecomment-467884400
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for branch |
   | +1 | mvninstall | 961 | trunk passed |
   | +1 | compile | 490 | trunk passed |
   | +1 | checkstyle | 91 | trunk passed |
   | +1 | mvnsite | 93 | trunk passed |
   | +1 | shadedclient | 822 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 164 | trunk passed |
   | +1 | javadoc | 64 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 70 | the patch passed |
   | +1 | compile | 465 | the patch passed |
   | +1 | javac | 465 | the patch passed |
   | -0 | checkstyle | 89 | hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 328 unchanged - 0 fixed = 330 total (was 328) |
   | +1 | mvnsite | 97 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 89 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | javadoc | 60 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 44 | hadoop-yarn-api in the patch failed. |
   | -1 | unit | 5354 | hadoop-yarn-server-resourcemanager in the patch failed. 
|
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 9824 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   |  |  Possible null pointer dereference of diags in 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.rememberTargetTransitionsAndStoreState(RMAppEvent,
 Object, RMAppState, RMAppState) on exception path  Dereferenced at 
RMAppImpl.java:diags in 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.rememberTargetTransitionsAndStoreState(RMAppEvent,
 Object, RMAppState, RMAppState) on exception path  Dereferenced at 
RMAppImpl.java:[line 1335] |
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   |   | hadoop.yarn.server.resourcemanager.TestRMProxyUsersConf |
   |   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
   |   | hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions |
   |   | hadoop.yarn.server.resourcemanager.TestRMRestart |
   |   | hadoop.yarn.server.resourcemanager.TestRM |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-521/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/521 |
   | JIRA Issue | YARN-9065 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux fbcbfeb7ca5c 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0e45020 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-521/1/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-521/1/artifact/out/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-521/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-521/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-521/1/testReport/ |
   | Max. process+thread count | 944 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/h

[GitHub] hadoop-yetus commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-27 Thread GitBox
hadoop-yetus commented on issue #502: HDDS-919. Enable prometheus endpoints for 
Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-467874390
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/502 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/502 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-502/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone datanodes

2019-02-27 Thread GitBox
elek commented on issue #502: HDDS-919. Enable prometheus endpoints for Ozone 
datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-467863160
 
 
   > Thank you @elek for addressing the comments.
   > There are many additional changes done in ozone-default.xml that are not 
related to this; can we do them as part of a separate Jira? Those changes do 
not belong to this Jira.
   
   Thanks for the warning @bharatviswa504. It's a rebase error; they shouldn't 
be there. Let me rebase the patch and remove the unrelated formatting changes...


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek closed pull request #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-27 Thread GitBox
elek closed pull request #518: HDDS-1178. Healthy pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] elek commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.

2019-02-27 Thread GitBox
elek commented on issue #518: HDDS-1178. Healthy pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467861608
 
 
   Thanks @anuengineer for the review and @bharatviswa504 for the PR. I am 
pushing it to trunk. 
   
   I checked the remaining unit test failures and they are not related.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15038) Abstract MetadataStore in S3Guard into a common module.

2019-02-27 Thread wujinhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wujinhu updated HADOOP-15038:
-
Attachment: HADOOP-15038.001.patch
Status: Patch Available  (was: Open)

> Abstract MetadataStore in S3Guard into a common module.
> ---
>
> Key: HADOOP-15038
> URL: https://issues.apache.org/jira/browse/HADOOP-15038
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.1.2, 3.0.3, 2.9.2, 3.2.0
>Reporter: Genmao Yu
>Assignee: wujinhu
>Priority: Major
> Attachments: HADOOP-15038.001.patch
>
>
> Opening this JIRA to discuss whether we should move {{MetadataStore}} in 
> {{S3Guard}} into a common module. 
> Based on this work, other filesystems or object stores could implement their 
> own metastores for optimization (for known issues such as consistency 
> problems and metadata operation performance). [~ste...@apache.org] and others 
> have done a great deal of foundational work in {{S3Guard}}, which makes this 
> much easier to start. I did some perf testing in HADOOP-14098 and started 
> related work for Aliyun OSS. Indeed there is still work to do for 
> {{S3Guard}}, such as the metadata cache becoming inconsistent with S3. That 
> will also be a problem for other object stores; however, we can do this work 
> in parallel.
> [~ste...@apache.org] [~fabbri] [~drankye] Any suggestion is appreciated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16140) Add emptyTrash option to purge trash immediately

2019-02-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16779180#comment-16779180
 ] 

Hadoop QA commented on HADOOP-16140:


| (/) +1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 21m 4s | trunk passed |
| +1 | compile | 15m 20s | trunk passed |
| +1 | checkstyle | 0m 49s | trunk passed |
| +1 | mvnsite | 1m 19s | trunk passed |
| +1 | shadedclient | 12m 29s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 32s | trunk passed |
| +1 | javadoc | 1m 8s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 43s | the patch passed |
| +1 | compile | 14m 26s | the patch passed |
| +1 | javac | 14m 26s | the patch passed |
| -0 | checkstyle | 0m 52s | hadoop-common-project/hadoop-common: The patch generated 2 new + 95 unchanged - 0 fixed = 97 total (was 95) |
| +1 | mvnsite | 1m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 27s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 44s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 1s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 92m 37s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16140 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12960331/HADOOP-14200.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4c8e7dee4186 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8c30114 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15986/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15986/testReport/ |
| Max. process+thread count | 1389 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15986/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

[GitHub] hunshenshi closed pull request #522: YARN-9065

2019-02-27 Thread GitBox
hunshenshi closed pull request #522: YARN-9065
URL: https://github.com/apache/hadoop/pull/522
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hunshenshi opened a new pull request #522: YARN-9065

2019-02-27 Thread GitBox
hunshenshi opened a new pull request #522: YARN-9065
URL: https://github.com/apache/hadoop/pull/522
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hunshenshi closed pull request #521: YARN-9065

2019-02-27 Thread GitBox
hunshenshi closed pull request #521: YARN-9065
URL: https://github.com/apache/hadoop/pull/521
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


