[jira] [Commented] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806720#comment-15806720
 ] 

Hadoop QA commented on HADOOP-13953:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 11 unchanged - 1 fixed = 11 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13953 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846151/HADOOP-13953.04.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 383910e40f7f 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a59df15 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11389/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11389/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make FTPFileSystem's data connection mode and transfer mode configurable
> 
>
> Key: HADOOP-13953
> URL: https://issues.apache.org/jira/browse/HADOOP-13953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>

[jira] [Updated] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13953:
---
Attachment: HADOOP-13953.04.patch

Patch 4 addresses the test failure. IMO those keys should be skipped rather than 
added to {{CommonConfigurationKeys}} - the existing 2 defined there aren't used 
anywhere either.
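
For reference, a minimal usage sketch, assuming the key names discussed in this 
issue ({{fs.ftp.data.connection.mode}}, {{fs.ftp.transfer.mode}}) and Apache 
Commons Net's mode constant names; illustrative rather than the committed patch:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

static FileSystem openFtp() throws Exception {
  Configuration conf = new Configuration();
  // Values mirror Commons Net's FTPClient mode constant names.
  conf.set("fs.ftp.data.connection.mode", "PASSIVE_LOCAL_DATA_CONNECTION_MODE");
  conf.set("fs.ftp.transfer.mode", "BLOCK_TRANSFER_MODE");
  return FileSystem.get(URI.create("ftp://user:password@ftphost/"), conf);
}
{code}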

> Make FTPFileSystem's data connection mode and transfer mode configurable
> 
>
> Key: HADOOP-13953
> URL: https://issues.apache.org/jira/browse/HADOOP-13953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13953.01.patch, HADOOP-13953.02.patch, 
> HADOOP-13953.03.patch, HADOOP-13953.04.patch
>
>







[jira] [Updated] (HADOOP-13806) MetricsSourceBuilder doesn't set hasAtMetric when the Source object is reregistered

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13806:

Target Version/s: 3.0.0-alpha2  (was: 2.8.0, 3.0.0-alpha2)

> MetricsSourceBuilder doesn't set hasAtMetric when the Source object is 
> reregistered
> ---
>
> Key: HADOOP-13806
> URL: https://issues.apache.org/jira/browse/HADOOP-13806
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: HADOOP-13806.001.patch
>
>
> We are trying to register an Object (an instance of MetricsObject) which has 
> already been registered and unregistered. During this operation we got an 
> exception during MetricsSourceBuilder.build():
> {code}
> org.apache.hadoop.metrics2.MetricsException: No valid @Metric annotation 
> found.
>   at 
> org.apache.hadoop.metrics2.lib.MetricsSourceBuilder.build(MetricsSourceBuilder.java:83)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:225)
>   at 
> org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testUnregisterSource(TestMetricsSystemImpl.java:417)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}
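
A minimal sketch of the sequence that triggers this, assuming a hypothetical 
{{@Metrics}}-annotated source class {{MySource}}; not the actual test code:
{code}
import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

static void reRegister() {
  MetricsSystem ms = DefaultMetricsSystem.instance();
  MySource source = new MySource();  // hypothetical annotated source class
  ms.register("MySource", "test source", source);  // first registration: OK
  ms.unregisterSource("MySource");
  ms.register("MySource", "test source", source);  // re-registration throws
  // org.apache.hadoop.metrics2.MetricsException:
  //   No valid @Metric annotation found.
}
{code}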






[jira] [Updated] (HADOOP-13747) Use LongAdder for more efficient metrics tracking

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13747:

Target Version/s: 3.0.0-alpha2  (was: 2.8.0, 3.0.0-alpha2)

> Use LongAdder for more efficient metrics tracking
> -
>
> Key: HADOOP-13747
> URL: https://issues.apache.org/jira/browse/HADOOP-13747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Zhe Zhang
>Assignee: Erik Krogen
> Attachments: HADOOP-13747.patch, benchmark_results
>
>
> Currently many metrics, including {{RpcMetrics}} and {{RpcDetailedMetrics}}, 
> use a synchronized counter that is updated by all handler threads (many 
> hundreds in large production clusters). As [~andrew.wang] suggested, it'd be 
> more efficient to use the [LongAdder | 
> http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/jsr166e/LongAdder.java?view=co]
>  library, which dynamically creates intermediate-result variables.
> Assigning to [~xkrogen] who has already done some investigation on this.
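
A minimal sketch of the contrast described above (not Hadoop's metrics code): a 
lock-protected counter serializes all handler threads on one monitor, while 
{{LongAdder}} spreads updates across internal cells and sums them on read:
{code}
import java.util.concurrent.atomic.LongAdder;

class Counters {
  private long lockedCount;
  private final LongAdder adderCount = new LongAdder();

  // Every caller contends on the same monitor.
  synchronized void incLocked() { lockedCount++; }

  // Callers update mostly-private cells; contention stays low.
  void incAdder() { adderCount.increment(); }

  // Metrics reads are rare, so summing the cells on read is cheap overall.
  long adderValue() { return adderCount.sum(); }
}
{code}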






[jira] [Updated] (HADOOP-12630) Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons causes misaligned memory access coredumps

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12630:

Target Version/s: 2.7.4, 2.6.6  (was: 2.8.0, 2.7.4, 2.6.6)

> Misuse of sun.misc.Unsafe by org.apache.hadoop.io.FastByteComparisons causes 
> misaligned memory access coredumps
> ---
>
> Key: HADOOP-12630
> URL: https://issues.apache.org/jira/browse/HADOOP-12630
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha1
> Environment: Solaris SPARC
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> Misuse of sun.misc.Unsafe by {{org.apache.hadoop.io.FastByteComparisons}} 
> causes misaligned memory accesses and results in coredumps. Stack traces 
> below:
> {code}
> hadoop-tools/hadoop-gridmix/core
>  --- called from signal handler with signal 10 (SIGBUS) ---
>  7717fa40 Unsafe_GetLong (18c000, 7e2fd6d8, 0, 19, 
> 775d4be0, 10018c000) + 158
>  70810dcc * sun/misc/Unsafe.getLong(Ljava/lang/Object;J)J+-30004
>  70810d70 * sun/misc/Unsafe.getLong(Ljava/lang/Object;J)J+0
>  70806d58 * 
> org/apache/hadoop/io/FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo([BII[BII)I+91
>  (line 405)
>  70806cb4 * 
> org/apache/hadoop/io/FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(Ljava/lang/Object;IILjava/lang/Object;II)I+16
>  (line 264)
>  7080783c * 
> org/apache/hadoop/io/FastByteComparisons.compareTo([BII[BII)I+11 (line 92)
>  70806cb4 * 
> org/apache/hadoop/io/WritableComparator.compareBytes([BII[BII)I+8 (line 376)
>  70806cb4 * 
> org/apache/hadoop/mapred/gridmix/GridmixRecord$Comparator.compare([BII[BII)I+61
>  (line 522)
>  70806cb4 * 
> org/apache/hadoop/mapred/gridmix/TestGridmixRecord.binSortTest(Lorg/apache/hadoop/mapred/gridmix/GridmixRecord;Lorg/apache/hadoop/mapred/gridmix/GridmixRecord;IILorg/apache/hadoop/io/WritableComparator;)V+280
>  (line 268)
>  70806f44 * 
> org/apache/hadoop/mapred/gridmix/TestGridmixRecord.testBaseRecord()V+57 (line 
> 482)
> {code}
> This also causes {{hadoop-mapreduce-project/hadoop-mapreduce-examples/core}}
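
A minimal sketch of the failing access pattern (not FastByteComparisons itself): 
reading a long from a byte[] through Unsafe at an offset that is not 8-byte 
aligned, which x86 tolerates but alignment-strict CPUs such as SPARC reject 
with SIGBUS:
{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnalignedRead {
  public static void main(String[] args) throws Exception {
    // Unsafe.getUnsafe() is restricted, so grab the singleton reflectively.
    Field f = Unsafe.class.getDeclaredField("theUnsafe");
    f.setAccessible(true);
    Unsafe unsafe = (Unsafe) f.get(null);

    byte[] buf = new byte[16];
    long base = Unsafe.ARRAY_BYTE_BASE_OFFSET;
    long aligned = unsafe.getLong(buf, base);       // 8-byte aligned: OK everywhere
    long unaligned = unsafe.getLong(buf, base + 1); // unaligned: SIGBUS on SPARC
    System.out.println(aligned + " " + unaligned);
  }
}
{code}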






[jira] [Commented] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806472#comment-15806472
 ] 

Hadoop QA commented on HADOOP-13885:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13885 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846108/HADOOP-13885.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f35eb02f23b8 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71a4acf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11388/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11388/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement getLinkTarget for ViewFileSystem
> --
>
> Key: HADOOP-13885
> URL: https://issues.apache.org/jira/browse/HADOOP-13885
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13885.01.patch, HADOOP-13885.02.patch
>
>
> ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view 
> filesystem is used to resolve the symbolic links, the default FileSystem 
> implementation throws UnsupportedOperationException.

[jira] [Commented] (HADOOP-13929) ADLS should not check in contract-test-options.xml

2017-01-06 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806425#comment-15806425
 ] 

John Zhuge commented on HADOOP-13929:
-

Thanks to both of you for the input:
* No go on {{fs.contract.test.enabled}}
* Globally git-ignore auth-keys.xml and azure-auth-keys.xml (entries sketched 
below)
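
A sketch of the corresponding root {{.gitignore}} entries, assuming plain 
repository-wide filename patterns:
{noformat}
auth-keys.xml
azure-auth-keys.xml
{noformat}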


> ADLS should not check in contract-test-options.xml
> --
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.






[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806403#comment-15806403
 ] 

Mingliang Liu commented on HADOOP-13345:


Yeah, both points make sense to me. I'll get my hands dirty working on that 
early; the code might not make the feature branch before the merge to trunk. For 
the 2nd point, I was thinking of rename, which uses the copy operation, but I 
was not sure whether GET after copy is consistent. Will check that as well.

> S3Guard: Improved Consistency for S3A
> -
>
> Key: HADOOP-13345
> URL: https://issues.apache.org/jira/browse/HADOOP-13345
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13345.prototype1.patch, 
> S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, 
> S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch
>
>
> This issue proposes S3Guard, a new feature of S3A, to provide an option for a 
> stronger consistency model than what is currently offered.  The solution 
> coordinates with a strongly consistent external store to resolve 
> inconsistencies caused by the S3 eventual consistency model.






[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806381#comment-15806381
 ] 

Mingliang Liu commented on HADOOP-13336:


+1

Nice work!

I'll hold off committing for 3 days in case [~cnauroth], [~mackrorysd] and 
[~dlaurence] have further comments. The checkstyle warnings seem related.

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch, 
> HADOOP-13336-HADOOP-13345-004.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint, but you 
> can't do things like read in one region and write back in another (e.g. a 
> distcp backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt or 
> s3a://b2.seoul, then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the AWS library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.
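
A minimal sketch of the per-bucket override pattern under discussion, assuming 
the {{fs.s3a.bucket.<bucket>.<option>}} key form from the attached patches; 
bucket names and endpoints are illustrative:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

static FileSystem openFrankfurtBucket() throws Exception {
  Configuration conf = new Configuration();
  conf.set("fs.s3a.endpoint", "s3.amazonaws.com");  // global default
  // Override for bucket b1 only; picked up when s3a://b1/ is opened.
  conf.set("fs.s3a.bucket.b1.endpoint", "s3.eu-central-1.amazonaws.com");
  return FileSystem.get(URI.create("s3a://b1/"), conf);
}
{code}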






[jira] [Commented] (HADOOP-13839) Fix outdated tracing documentation

2017-01-06 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806371#comment-15806371
 ] 

Colin P. McCabe commented on HADOOP-13839:
--

Thanks, [~elek] and [~iwasakims]!

> Fix outdated tracing documentation
> --
>
> Key: HADOOP-13839
> URL: https://issues.apache.org/jira/browse/HADOOP-13839
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, tracing
>Affects Versions: 2.7.3
>Reporter: Masatake Iwasaki
>Assignee: Elek, Marton
>Priority: Minor
> Fix For: 2.7.4
>
> Attachments: HADOOP-13839-branch-2.7.001.patch
>
>
> Sample code in the tracing doc is based on an older version of 
> SpanReceiverHost. The doc in branch-2 and trunk looks good.






[jira] [Commented] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806367#comment-15806367
 ] 

Hadoop QA commented on HADOOP-13908:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13908 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846118/HADOOP-13908-HADOOP-13345.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 898a0c834393 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / a1b47db |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11387/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11387/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13

[jira] [Commented] (HADOOP-13438) Optimize IPC server protobuf decoding

2017-01-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806207#comment-15806207
 ] 

Andrew Wang commented on HADOOP-13438:
--

I used the typo list and exclude list for alpha1 when doing manual checking, 
but they're not used by the improved {{versions}} script I'm using for alpha2. 
You still need to supply the fixup file that the {{versions}} script uses to 
reconcile git with JIRA, which is per-branch since it fixes up specific hashes.

It should be possible to add cherry-pick support to automatically apply fixup 
information to backports, but IMO it's safer to just have a fixup file per 
branch.

> Optimize IPC server protobuf decoding
> -
>
> Key: HADOOP-13438
> URL: https://issues.apache.org/jira/browse/HADOOP-13438
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13438.patch, HADOOP-13438.patch.1
>
>
> The current use of the protobuf API uses an expensive code path.  The builder 
> uses the parser to instantiate a message, then copies the message into the 
> builder.  The parser creates multi-layered, internally buffering streams 
> that cause excessive byte[] allocations.
> Using the parser directly with a coded input stream backed by the byte[] from 
> the wire takes a fast path straight to the pb message's ctor.  
> Substantially less garbage is generated.
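
A minimal sketch of the two decode paths described above, assuming a 
hypothetical generated message class {{RpcRequestProto}}; not the actual IPC 
server code:
{code}
import com.google.protobuf.CodedInputStream;

// Slow path: the builder parses a message and then copies it into itself,
// allocating internal buffering streams along the way.
static RpcRequestProto decodeViaBuilder(byte[] wireBytes) throws Exception {
  return RpcRequestProto.newBuilder().mergeFrom(wireBytes).build();
}

// Fast path: hand the parser a CodedInputStream backed directly by the wire
// byte[], going straight to the message constructor with far less garbage.
static RpcRequestProto decodeDirect(byte[] wireBytes) throws Exception {
  CodedInputStream cis =
      CodedInputStream.newInstance(wireBytes, 0, wireBytes.length);
  return RpcRequestProto.parseFrom(cis);
}
{code}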






[jira] [Commented] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806195#comment-15806195
 ] 

Chris Nauroth commented on HADOOP-13908:


[~liuml07], thank you for the update.  I agree with proceeding with patch 005 
and possibly optimizing the table probe logic in the scope of a different JIRA 
issue.  I will hold off committing in case Steve wants to respond again, since 
he made the earlier comment about preferring to avoid {{table.describe()}}.

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch, HADOOP-13908-HADOOP-13345.005.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set true, we 
> still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for its becoming active, 
> before any table/item operations.






[jira] [Comment Edited] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806095#comment-15806095
 ] 

Mingliang Liu edited comment on HADOOP-13908 at 1/6/17 11:10 PM:
-

Chris and I have checked the test failure of the v4 patch. We have not spotted 
the root cause.

Compared with the v3 patch (now rebased from the feature branch as v5), the v4 
patch first uses getItem() to probe; there may be some unknown internal state of 
the table in the client/SDK after that request fails. That part we don't quite 
understand yet. I tested the v5 patch multiple times (>10) and no "requested 
resource not found" failure happened.

This problem is really _initializing the table correctly_ vs. _using a 
lightweight approach to probe the table status_. As my ~1mo PTO starts next 
week, I'd stick with the v5 patch and address the latter later. However, any 
help would be much appreciated, and I will help with debugging/reviewing/testing. 
Ping [~ste...@apache.org] and [~rajesh.balamohan].


was (Author: liuml07):
Chris and I have checked the test failure of the v4 patch. We have not spotted 
the failure.

Compared with the v3 patch (now rebased from the feature branch as v5), the v4 
patch first uses getItem() to probe; there may be some unknown internal state of 
the table in the client/SDK after that request fails. That part we don't quite 
understand yet. I tested the v5 patch multiple times (>10) and no failure 
happened.

This problem is really _initializing the table correctly_ vs. _using a 
lightweight approach to probe the table status_. As my ~1mo PTO starts next 
week, I'd stick with the v5 patch and address the latter later. However, any 
help would be much appreciated, and I will help with debugging/reviewing/testing. 
Ping [~ste...@apache.org] and [~rajesh.balamohan].

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch, HADOOP-13908-HADOOP-13345.005.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set true, we 
> still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for its becoming active, 
> before any table/item operations.






[jira] [Updated] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13908:
---
Attachment: HADOOP-13908-HADOOP-13345.005.patch

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch, HADOOP-13908-HADOOP-13345.005.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set true, we 
> still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for its becoming active, 
> before any table/item operations.






[jira] [Commented] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15806095#comment-15806095
 ] 

Mingliang Liu commented on HADOOP-13908:


Chris and I have checked the test failure of the v4 patch. We have not spotted 
the failure.

Compared with the v3 patch (now rebased from the feature branch as v5), the v4 
patch first uses getItem() to probe; there may be some unknown internal state of 
the table in the client/SDK after that request fails. That part we don't quite 
understand yet. I tested the v5 patch multiple times (>10) and no failure 
happened.

This problem is really _initializing the table correctly_ vs. _using a 
lightweight approach to probe the table status_. As my ~1mo PTO starts next 
week, I'd stick with the v5 patch and address the latter later. However, any 
help would be much appreciated, and I will help with debugging/reviewing/testing. 
Ping [~ste...@apache.org] and [~rajesh.balamohan].
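
A minimal sketch of the probe under discussion, using the AWS SDK DynamoDB 
document API; the table name and client wiring are illustrative, not the patch 
itself:
{code}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;

static void probeTable() throws InterruptedException {
  // Default client; region/credentials wiring omitted for brevity.
  DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient());
  Table table = dynamoDB.getTable("s3guard-metadata");  // illustrative name
  try {
    table.describe();       // server round trip; throws if the table is absent
    table.waitForActive();  // block until the table leaves CREATING status
  } catch (ResourceNotFoundException e) {
    // Table does not exist: only create it when
    // fs.s3a.s3guard.ddb.table.create is true.
  }
}
{code}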

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set true, we 
> still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for its becoming active, 
> before any table/item operations.






[jira] [Updated] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem

2017-01-06 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13885:

Attachment: HADOOP-13885.02.patch

Thanks for the review [~andrew.wang]. Can you please take a look at the updated 
patch?
Attached v02 patch to address the following:
* Made {{ViewFileSystem#getLinkTarget}} throw NotInMountpointException when the 
internal mount point resolution fails, and pass through other exceptions, 
including FNFE, from the target filesystem as is (see the sketch below).
* Updated ViewFileSystemBaseTest to verify both normal and relative symbolic 
links, and added an assume to skip irrelevant tests.
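
A minimal sketch of the delegation approach described above, following 
ViewFileSystem's usual mount-resolution pattern ({{fsState.resolve}}); a sketch 
under those assumptions, not the attached patch:
{code}
@Override
public Path getLinkTarget(final Path f) throws IOException {
  final InodeTree.ResolveResult<FileSystem> res;
  try {
    // viewfs-internal resolution; failure means f is not in any mount point
    res = fsState.resolve(getUriPath(f), true);
  } catch (FileNotFoundException e) {
    throw new NotInMountpointException(f, "getLinkTarget");
  }
  // Delegate so the target filesystem's own exceptions (e.g. FNFE) surface
  // as is, and return the resolved target path unmodified.
  return res.targetFileSystem.getLinkTarget(res.remainingPath);
}
{code}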


> Implement getLinkTarget for ViewFileSystem
> --
>
> Key: HADOOP-13885
> URL: https://issues.apache.org/jira/browse/HADOOP-13885
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13885.01.patch, HADOOP-13885.02.patch
>
>
> ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view 
> filesystem is used to resolve the symbolic links, the default FileSystem 
> implementation throws UnsupportedOperationException.
> The proposal is to define getLinkTarget() for ViewFileSystem and invoke the 
> target FileSystem for resolving the symbolic links. Path thus returned is 
> preferred to be a viewfs qualified path, so that it can be used again on the 
> ViewFileSystem handle. 






[jira] [Commented] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805979#comment-15805979
 ] 

Hadoop QA commented on HADOOP-13953:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 6 unchanged - 1 fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13953 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846085/HADOOP-13953.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a94f4161e2f4 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11386/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11386/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11386/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make FTPFileSystem's data connection mode and transfer mode configurable

[jira] [Commented] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem

2017-01-06 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805955#comment-15805955
 ] 

Manoj Govindassamy commented on HADOOP-13885:
-

* Sure, will make getLinkTarget() return the resolved path as is.

* There are a few other places, like getDefaultBlockSize(), 
getDefaultReplication(), getServerDefaults(), and getTrashRoot(), which have a 
try block surrounding both the viewfs-internal resolve and the target 
filesystem API call, catch FNFE/Exception, and re-throw it as 
NotInMountpointException. But all these places seem to be fine, as the target 
filesystem calls in these APIs don't throw any FNFE, so we will not be masking 
any FNFE from the target filesystem in these APIs. Will take a closer look 
anyway and fix the needed ones in a separate jira.

> Implement getLinkTarget for ViewFileSystem
> --
>
> Key: HADOOP-13885
> URL: https://issues.apache.org/jira/browse/HADOOP-13885
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13885.01.patch
>
>
> ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view 
> filesystem is used to resolve the symbolic links, the default FileSystem 
> implementation throws UnsupportedOperationException.
> The proposal is to define getLinkTarget() for ViewFileSystem and invoke the 
> target FileSystem for resolving the symbolic links. Path thus returned is 
> preferred to be a viewfs qualified path, so that it can be used again on the 
> ViewFileSystem handle. 






[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805942#comment-15805942
 ] 

Mingliang Liu commented on HADOOP-13650:


{quote}
In the current DynamoDBMetadataStore#initialize(Configuration), such case will 
raise Exception if we do not explicitly specify s3a URI in the CLI because it 
can not create S3AFileSystem.
{quote}

That's right. For both options I posted in the parent thread 
(https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=15802751&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15802751),
we will have to change DynamoDBMetadataStore accordingly. If you also prefer 
option 2 (both here and in the parent thread comment), I can prepare a patch 
for that. Certainly this can also be addressed later. Thanks!

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch, 
> HADOOP-13650-HADOOP-13345.001.patch, HADOOP-13650-HADOOP-13345.002.patch, 
> HADOOP-13650-HADOOP-13345.003.patch, HADOOP-13650-HADOOP-13345.004.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata store, 
> i.e., create or delete the metadata store, or {{import}}/{{sync}} the file 
> metadata between the metadata store and S3: 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 






[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805852#comment-15805852
 ] 

Hadoop QA commented on HADOOP-13877:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 29 
new + 0 unchanged - 0 fixed = 29 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846083/HADOOP-13877-HADOOP-13345.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9824dd43385c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / a1b47db |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11385/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11385/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11385/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
> 

[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805850#comment-15805850
 ] 

Aaron Fabbri commented on HADOOP-13336:
---

+1 (binding only on HADOOP-13345 feature branch).

Excited about this feature.  Also happy the implementation turned out to be 
pretty simple.

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch, 
> HADOOP-13336-HADOOP-13345-004.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint, but you 
> can't do things like read in one region and write back in another (e.g. a 
> distcp backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt or 
> s3a://b2.seoul, then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the AWS library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.






[jira] [Commented] (HADOOP-13885) Implement getLinkTarget for ViewFileSystem

2017-01-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805841#comment-15805841
 ] 

Andrew Wang commented on HADOOP-13885:
--

Hi Manoj,

bq. I was following the getFileStatus() model, to qualify returned paths. Let 
me know if this raw target filesystem resolved path is ok and no more viewfs 
qualification is needed.

I'm pretty sure link targets are supposed to be returned as is, per the 
FileContext documentation. It's similar to {{man 2 readlink}}, which returns 
the contents directly.

bq. In the attached patch v01, the try block surrounds both Case 1 and Case 2 
together. I will fix this in the next patch revision so that only Case 1 
returns NotInMountPointException and Case 2 returns FNFE.

SGTM! Are there other places too where this should be changed?

> Implement getLinkTarget for ViewFileSystem
> --
>
> Key: HADOOP-13885
> URL: https://issues.apache.org/jira/browse/HADOOP-13885
> Project: Hadoop Common
>  Issue Type: Task
>  Components: viewfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13885.01.patch
>
>
> ViewFileSystem doesn't override FileSystem#getLinkTarget(). So, when view 
> filesystem is used to resolve the symbolic links, the default FileSystem 
> implementation throws UnsupportedOperationException.
> The proposal is to define getLinkTarget() for ViewFileSystem and invoke the 
> target FileSystem for resolving the symbolic links. Path thus returned is 
> preferred to be a viewfs qualified path, so that it can be used again on the 
> ViewFileSystem handle. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805823#comment-15805823
 ] 

Hadoop QA commented on HADOOP-13336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 10 
new + 34 unchanged - 1 fixed = 44 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846081/HADOOP-13336-HADOOP-13345-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 787ba5340c4c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / a1b47db |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11384/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11384/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11384/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11384/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
>  

[jira] [Commented] (HADOOP-13914) s3guard: improve S3AFileStatus#isEmptyDirectory handling

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805814#comment-15805814
 ] 

Steve Loughran commented on HADOOP-13914:
-

Good writeup.

I had a talk with Mingliang & Rajesh about this.

We only want that flag as an optimisation for follow-on work in S3AFS: if you 
get a delete(path) you can do a getFileStatus and, if the status is a 
directory, see whether it is empty (and so skip the need for recursive=true) 
without another round trip.

With S3Guard you don't need that caching of state. It can be done on demand, 
only in those few cases where we actually need to know about it, which pushes 
for it being something that the metadata store can work out on demand. We 
would need to document that the status field is only valid without an MD store.
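
To make that concrete, a rough sketch of the on-demand check (using the 
MetadataStore API on the feature branch; treat it as illustrative, not a 
patch):

{code}
// Illustrative only: rather than trusting a cached isEmptyDirectory flag,
// ask the metadata store for the directory's children at the moment a
// delete() actually needs the answer.
static boolean isEmptyDirOnDemand(MetadataStore ms, Path path)
    throws IOException {
  DirListingMetadata children = ms.listChildren(path);
  return children == null || children.getListing().isEmpty();
}
{code}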




> s3guard: improve S3AFileStatus#isEmptyDirectory handling
> 
>
> Key: HADOOP-13914
> URL: https://issues.apache.org/jira/browse/HADOOP-13914
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Mingliang Liu
> Attachments: s3guard-empty-dirs.md, test-only-HADOOP-13914.patch
>
>
> As discussed in HADOOP-13449, proper support for the isEmptyDirectory() flag 
> stored in S3AFileStatus is missing from DynamoDBMetadataStore.
> The approach taken by LocalMetadataStore is not suitable for the DynamoDB 
> implementation, and also sacrifices good code separation to minimize 
> S3AFileSystem changes pre-merge to trunk.
> I will attach a design doc that attempts to clearly explain the problem and 
> preferred solution.  I suggest we do this work after merging the HADOOP-13345 
> branch to trunk, but am open to suggestions.
> I can also attach a patch of an integration test that exercises the missing 
> case and demonstrates a failure with DynamoDBMetadataStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13953:
---
Attachment: HADOOP-13953.03.patch

Thanks for the review, [~jojochuang]. Patch 3 to address the comments.

Didn't know those FTP configs were in core-default.xml; added them there. Also 
looked for docs, but found none.

> Make FTPFileSystem's data connection mode and transfer mode configurable
> 
>
> Key: HADOOP-13953
> URL: https://issues.apache.org/jira/browse/HADOOP-13953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13953.01.patch, HADOOP-13953.02.patch, 
> HADOOP-13953.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805797#comment-15805797
 ] 

Hadoop QA commented on HADOOP-13336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 9 
new + 34 unchanged - 1 fixed = 43 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846079/HADOOP-13336-HADOOP-13345-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ae26dcd6d480 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / a1b47db |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11383/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11383/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11383/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11383/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
>   

[jira] [Updated] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-06 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13877:
---
Attachment: HADOOP-13877-HADOOP-13345.002.patch

In the current unit test, the cases that do not use the default bucket name 
set the config again and initialize a new metadata store. You're right about 
making createContract() more flexible and clear about the table/bucket binding.

I had a look at the test code, and wonder if we can make it clearer with a 
different approach: 1) we can avoid creating the contract multiple times if we 
make {{contract}} protected; 2) we can remove the assumption that the default 
table name is always the bucket name; 3) we can also respect the test 
configuration file, as the v1 patch does.
I attached a simple patch for that idea. Creating a new contract is not heavy 
anyway, as the S3 file system is mocked. I also +1 the v1 patch; you can skip 
the v2 patch unless interested.

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch, 
> HADOOP-13877-HADOOP-13345.002.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13589) S3Guard: Allow execution of all S3A integration tests with S3Guard enabled.

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805782#comment-15805782
 ] 

Steve Loughran commented on HADOOP-13589:
-

Now that we can do per-bucket config, this should just be a matter of 
declaring which FS to test against: use one with S3Guard enabled, and that's 
what you get.
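
For example, something along these lines should suffice (the bucket name is a 
placeholder, and the property relies on the per-bucket propagation from 
HADOOP-13336):

{code}
// Hypothetical example: enable S3Guard for a single test bucket, leaving
// every other bucket without a metadata store. "test-bucket" is a placeholder.
Configuration conf = new Configuration();
conf.set("fs.s3a.bucket.test-bucket.metadatastore.impl",
    "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
{code}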

> S3Guard: Allow execution of all S3A integration tests with S3Guard enabled.
> ---
>
> Key: HADOOP-13589
> URL: https://issues.apache.org/jira/browse/HADOOP-13589
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Steve Loughran
>
> With S3Guard enabled, S3A must continue to be functionally correct.  The best 
> way to verify this is to execute our existing S3A integration tests in a mode 
> with S3Guard enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Attachment: HADOOP-13336-HADOOP-13345-004.patch

patch 004; address Aaron's comments

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch, 
> HADOOP-13336-HADOOP-13345-004.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Status: Patch Available  (was: Open)

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch, 
> HADOOP-13336-HADOOP-13345-004.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Status: Open  (was: Patch Available)

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805774#comment-15805774
 ] 

Steve Loughran commented on HADOOP-13336:
-

h2. "fs.s3a.impl". is the only property you can't stamp on. I'll extend the 
test to make sure that the metadatatore one survivces

h2. {{value==null}}.

I don't think so either, but some of the underlying code hinted at it. I'll cut 
it, as it shouldn't happen.
thanks for the feedback; I'll do an iteration on it.



Regarding the log4j.properties, I had left that out of the stuff I was checking 
in, but had got my git diff wrong. Sorry.


> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805774#comment-15805774
 ] 

Steve Loughran edited comment on HADOOP-13336 at 1/6/17 9:16 PM:
-

"fs.s3a.impl". is the only property you can't stamp on. I'll extend the test to 
make sure that the metadatatore one survivces

{{value==null}}.

I don't think so either, but some of the underlying code hinted at it. I'll cut 
it, as it shouldn't happen.
thanks for the feedback; I'll do an iteration on it.



Regarding the log4j.properties, I had left that out of the stuff I was checking 
in, but had got my git diff wrong. Sorry.



was (Author: ste...@apache.org):
h2. "fs.s3a.impl". is the only property you can't stamp on. I'll extend the 
test to make sure that the metadatatore one survivces

h2. {{value==null}}.

I don't think so either, but some of the underlying code hinted at it. I'll cut 
it, as it shouldn't happen.
thanks for the feedback; I'll do an iteration on it.



regarding the log4.properties, I had left that out the stuff I was checking in, 
but had got my git diff wrong. Sorry.


> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805771#comment-15805771
 ] 

Steve Loughran commented on HADOOP-13877:
-

OK

+1

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Attachment: HADOOP-13336-HADOOP-13345-003.patch

Patch 003: fix javadoc. S3AFS declares whether or not it has a metadata store; 
use that value in the FS and tests.


> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Status: Patch Available  (was: Open)

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch, HADOOP-13336-HADOOP-13345-003.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13931) S3AGuard: Use BatchWriteItem in DynamoDBMetadataStore#put()

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805702#comment-15805702
 ] 

Mingliang Liu commented on HADOOP-13931:


Thanks [~cnauroth] for your review and commit, and [~rajesh.balamohan] for 
reporting this.

> S3AGuard: Use BatchWriteItem in DynamoDBMetadataStore#put()
> ---
>
> Key: HADOOP-13931
> URL: https://issues.apache.org/jira/browse/HADOOP-13931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Rajesh Balamohan
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13931-HADOOP-13345.000.patch, 
> HADOOP-13931-HADOOP-13345.001.patch
>
>
> Using {{batchWriteItem}} might be more performant in 
> {{DynamoDBMetadataStore#put(DirListingMetadata meta)}} and  
> {{DynamoDBMetadataStore#put(PathMetadata meta)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS should not check in contract-test-options.xml

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805670#comment-15805670
 ] 

Steve Loughran commented on HADOOP-13929:
-

I understand the enable/disable thing. Essentially we have an implicit 
enable/disable flag: if the path is there, they run.

Regarding a global flag: no, -1 to that. I often run the AWS s3? tests with 
s3a defined and s3n and s3 undefined. That way: fewer tests, faster failures.

Per filesystem? Possibly.

> ADLS should not check in contract-test-options.xml
> --
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-06 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805631#comment-15805631
 ] 

Aaron Fabbri commented on HADOOP-13877:
---

Agreed, but different test cases use different bucket names.

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805590#comment-15805590
 ] 

Aaron Fabbri commented on HADOOP-13336:
---

+1 (non-binding) once you remove the debug log level change at the end of the 
patch.

I like this approach.  I had thought about this a bit before you started work 
and was thinking of having a "flatten" (a.k.a. propagate) function as you do 
here.  Seems more efficient at runtime and easier to debug.  I like the use of 
the Configuration source feature too.

It looks like your omission of .impl keys still works with 
fs.s3a.metadatastore.impl, since {{"impl".equals("metadatastore.impl") == 
false}}.  (You compare the whole key, not just the suffix.)  This is good 
because we'd like to be able to enable/disable s3guard on a per-bucket basis.

Other patch comments:

{code}
+  public static Configuration propagateBucketOptions(Configuration source,
+      String bucket) {
+

+    for (Map.Entry<String, String> entry : source) {
+      final String key = entry.getKey();
+      // get the (unexpanded) value.
+      final String value = entry.getValue();
+      if (!key.startsWith(bucketPrefix)
+          || bucketPrefix.equals(key)
+          || value == null) {
+        continue;
+      }
{code}

Was curious about the {{value == null}} part... does that ever happen?  
Anyhow, it seems safe to include the check.
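
To make the key-comparison point above concrete, a hedged example of the 
propagation semantics (assuming the helper is exposed as a static method as in 
the patch excerpt; the bucket name and values are placeholders):

{code}
// Illustrative: per-bucket keys are copied down to the base fs.s3a.* names;
// only the exact per-bucket "impl" key (fs.s3a.bucket.b1.impl) is skipped,
// so fs.s3a.bucket.b1.metadatastore.impl propagates as expected.
Configuration conf = new Configuration(false);
conf.set("fs.s3a.bucket.b1.endpoint", "s3.eu-central-1.amazonaws.com");
conf.set("fs.s3a.bucket.b1.metadatastore.impl",
    "org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore");

Configuration patched = propagateBucketOptions(conf, "b1");
// patched now has fs.s3a.endpoint and fs.s3a.metadatastore.impl set.
{code}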

Minor nit in the docs:

{code}
+Different S3 buckets can be accessed with S3A client configurations.
{code}

Should this read "can be accessed with different S3A client configurations"?

{code}
--- a/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties
+++ b/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties
@@ -21,7 +21,7 @@ log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
  
 # for debugging low level S3a operations, uncomment these lines
 # Log all S3A classes
-#log4j.logger.org.apache.hadoop.fs.s3a=DEBUG
+log4j.logger.org.apache.hadoop.fs.s3a=DEBUG
  
 # Log S3Guard classes
 #log4j.logger.org.apache.hadoop.fs.s3a.s3guard=DEBUG
{code}
Was this intentional?

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805557#comment-15805557
 ] 

Wei-Chiu Chuang commented on HADOOP-13953:
--

Thanks [~xiaochen] for filing the bug report and contributing the patch.
Overall looks good to me. Two nits:
{code}
LOG.info("Cannot parse the value for " + FS_FTP_DATA_CONNECTION_MODE
+ ": " + mode + ". Using default.");
{code}
Shouldn't it be a WARN log?

It would also be nice to add the valid configuration values and the default 
values to core-default.xml.
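
For instance, a client could then do something like this (key and value names 
here follow the patch's constants and the Apache Commons Net FTPClient modes; 
treat it as a sketch, not the final documentation):

{code}
// Hypothetical usage sketch for the new properties; the value names mirror
// the Commons Net FTPClient data-connection and transfer-mode constants.
Configuration conf = new Configuration();
conf.set("fs.ftp.data.connection.mode", "PASSIVE_LOCAL_DATA_CONNECTION_MODE");
conf.set("fs.ftp.transfer.mode", "BLOCK_TRANSFER_MODE");
{code}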

> Make FTPFileSystem's data connection mode and transfer mode configurable
> 
>
> Key: HADOOP-13953
> URL: https://issues.apache.org/jira/browse/HADOOP-13953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13953.01.patch, HADOOP-13953.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Status: Open  (was: Patch Available)

cancel for javadoc.

One other thing I'm going to change: cut 
{{isNullMetadataStoreConfigured(Configuration conf)}}. You need to know the 
bucket name here. Best to ask the FS itself whether it's there.

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch
>
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
> s3a://b2.seol , then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc, in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the aws library sort the details out itself, maybe with 
> some config options for working with non-AWS infra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13453) S3Guard: Instrument new functionality with Hadoop metrics.

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805499#comment-15805499
 ] 

Steve Loughran commented on HADOOP-13453:
-

# These should all go into org.apache.hadoop.fs.s3a.Statistic & 
Instrumentation.
# Tracking total & ongoing DynamoDB request rates could be useful, as it will 
help identify when you've over- or under-provisioned your DDB.
# Include stats on detected inconsistencies.
# Include them in {{S3AFileSystem.toString}}.

 

> S3Guard: Instrument new functionality with Hadoop metrics.
> --
>
> Key: HADOOP-13453
> URL: https://issues.apache.org/jira/browse/HADOOP-13453
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Ai Deng
>
> Provide Hadoop metrics showing operational details of the S3Guard 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13877) S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805387#comment-15805387
 ] 

Steve Loughran commented on HADOOP-13877:
-

This looks simpler than the other patch.

> S3Guard: fix TestDynamoDBMetadataStore when fs.s3a.s3guard.ddb.table is set
> ---
>
> Key: HADOOP-13877
> URL: https://issues.apache.org/jira/browse/HADOOP-13877
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13877-HADOOP-13345.001.patch
>
>
> I see a couple of failures in the DynamoDB MetadataStore unit test when I set 
> {{fs.s3a.s3guard.ddb.table}} in my test/resources/core-site.xml.
> I have a fix already, so I'll take this JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13936) S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805379#comment-15805379
 ] 

Steve Loughran commented on HADOOP-13936:
-

Smaller batch sizes? Those could cause throttling problems.
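
For what the per-batch alternative could look like (the helper names below are 
invented, purely to illustrate the trade-off the description raises):

{code}
// Invented names, illustration only: update the metadata store after every
// deleted batch, so a failure mid-delete leaves at most one batch of keys
// unrecorded instead of all of them. Smaller batches narrow the window
// further, at the cost of more DynamoDB writes (and possible throttling).
for (List<DeleteObjectsRequest.KeyVersion> batch : partition(keys, 1000)) {
  s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(batch));
  metadataStore.recordDeleted(batch);  // hypothetical per-batch update
}
{code}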

> S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation
> -
>
> Key: HADOOP-13936
> URL: https://issues.apache.org/jira/browse/HADOOP-13936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> As a part of the {{S3AFileSystem.delete}} operation, {{innerDelete}} is 
> invoked, which deletes keys from S3 in batches (the default is 1000), but 
> DynamoDB is updated only at the end of this operation. This can cause issues 
> when deleting a large number of keys. 
> E.g., it is possible to get an exception after deleting 1000 keys, and in 
> such cases DynamoDB would not be updated. This can cause DynamoDB to go out 
> of sync. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805360#comment-15805360
 ] 

Hadoop QA commented on HADOOP-13336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 7 
new + 33 unchanged - 0 fixed = 40 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846061/HADOOP-13336-HADOOP-13345-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 785d1ba1db19 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / a1b47db |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11382/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11382/artifact/patchprocess/whitespace-eol.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11382/artifact/patchprocess/patch-javadoc-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11382/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11382/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-11694) Über-jira: S3a phase II: robustness, scale and performance

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805339#comment-15805339
 ] 

Steve Loughran commented on HADOOP-11694:
-

Note that I've not quite closed it; we have that thread-pool one for me to 
look at...

> Über-jira: S3a phase II: robustness, scale and performance
> --
>
> Key: HADOOP-11694
> URL: https://issues.apache.org/jira/browse/HADOOP-11694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> HADOOP-11571 covered the core s3a bugs surfacing in Hadoop-2.6 & other 
> enhancements to improve S3 (performance, proxy, custom endpoints)
> This JIRA covers post-2.7 issues and enhancements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805340#comment-15805340
 ] 

Steve Loughran commented on HADOOP-13826:
-

Thanks, I'll look at this now.
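
The TransferManager javadoc quoted in the description below points at the 
shape of the fix; a minimal sketch (not the attached patches):

{code}
// Minimal sketch, not HADOOP-13826.004.patch: hand TransferManager an
// executor over an unbounded queue, so control tasks can always submit
// their subtasks and the deadlock described below cannot occur.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    maxThreads, maxThreads, 60L, TimeUnit.SECONDS,
    new LinkedBlockingQueue<Runnable>());  // unbounded work queue
TransferManager transfers = new TransferManager(s3Client, pool);
{code}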

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch, 
> HADOOP-13826.003.patch, HADOOP-13826.004.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-11694) Über-jira: S3a phase II: robustness, scale and performance

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11694.
-
   Resolution: Fixed
Fix Version/s: 2.8.0

I am declaring this JIRA done: the key stuff for Hadoop 2.8 is all in!

I've moved some outstanding stuff to the Phase III JIRA, but the big bits 
(high-performance reads, writes, and directory listings) are all in.

There's one thing we may actually want to pull up: HADOOP-13336, in which you 
can specify per-bucket config. We need this for S3Guard, but it's also useful 
when trying to do work between buckets with different logins. Putting it in 
2.8 will get it out the door faster.

> Über-jira: S3a phase II: robustness, scale and performance
> --
>
> Key: HADOOP-11694
> URL: https://issues.apache.org/jira/browse/HADOOP-11694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> HADOOP-11571 covered the core s3a bugs surfacing in Hadoop-2.6 & other 
> enhancements to improve S3 (performance, proxy, custom endpoints)
> This JIRA covers post-2.7 issues and enhancements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11417) review filesystem seek logic, clarify/confirm spec, test & fix compliance

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11417:

Target Version/s: 2.9.0  (was: 2.8.0)

> review filesystem seek logic, clarify/confirm spec, test & fix compliance
> -
>
> Key: HADOOP-11417
> URL: https://issues.apache.org/jira/browse/HADOOP-11417
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs, fs/s3, fs/swift
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> HADOOP-11270 implies there's a difference between the way HDFS and the object 
> stores handle {{seek(len(file))}}:
> # review what HDFS does, and add a contract test to exactly demonstrate HDFS 
> behaviour
> # ensure the FS spec is consistent with this
> # test/audit all supported filesystems to verify consistent behaviour
> # fix where appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13276) S3a operations keep retrying if the password is wrong

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13276:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> S3a operations keep retrying if the password is wrong
> -
>
> Key: HADOOP-13276
> URL: https://issues.apache.org/jira/browse/HADOOP-13276
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Minor
>
> If you do a {{hadoop fs}} command with the AWS account valid but the password 
> wrong, it takes a while to time out, because of retries happening underneath.
> Eventually it gives up, but failing fast would be better.
> # maybe: check the password length and fail if it is not the right length (is 
> there a standard one? Or at least a range?)
> # consider a retry policy which fails faster on signature failures/403 
> responses
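
For the second item, the shape of such a policy might be as follows. This is a 
hypothetical sketch, not Hadoop's actual retry API; {{RetryDecision}}, 
{{isAuthFailure()}} and {{defaultPolicy()}} are assumed names.

{code}
// Hypothetical fail-fast wrapper around a default retry decision: treat
// 403/signature errors as unrecoverable instead of retrying them.
RetryDecision shouldRetry(AmazonServiceException e, int attempts) {
  if (e.getStatusCode() == 403 || isAuthFailure(e)) {
    return RetryDecision.FAIL;     // bad credentials: retrying cannot help
  }
  return defaultPolicy(e, attempts);
}
{code}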



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13713) ITestS3AContractRootDir.testRmEmptyRootDirNonRecursive failing intermittently

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13713:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> ITestS3AContractRootDir.testRmEmptyRootDirNonRecursive failing intermittently
> -
>
> Key: HADOOP-13713
> URL: https://issues.apache.org/jira/browse/HADOOP-13713
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: s3 ireland
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> intermittent failure of 
> {{ITestS3AContractRootDir.testRmEmptyRootDirNonRecursive}} surfacing in the 
> HADOOP-12774 test run.
> This test came in with HADOOP-12977; it deletes all 
> children of the root dir, then verifies that they are gone. Although it 
> tested happily during development, the sightings of two transient failures 
> before it worked implied that it either has a race condition with 
> previous tests and/or the maven build, or that we are seeing listing inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13362) DefaultMetricsSystem leaks the source name when a source unregisters

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13362:

Priority: Blocker  (was: Critical)

> DefaultMetricsSystem leaks the source name when a source unregisters
> 
>
> Key: HADOOP-13362
> URL: https://issues.apache.org/jira/browse/HADOOP-13362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Junping Du
>Priority: Blocker
> Fix For: 2.7.4
>
> Attachments: HADOOP-13362-branch-2.7.patch
>
>
> Ran across a nodemanager that was spending most of its time in GC.  Upon 
> examination of the heap most of the memory was going to the map of names in 
> org.apache.hadoop.metrics2.lib.UniqueNames.  In this case the map had almost 
> 2 million entries.  Looking at a few of the map's entries showed names like 
> "ContainerResource_container_e01_1459548490386_8560138_01_002020", 
> "ContainerResource_container_e01_1459548490386_2378745_01_000410", etc.
> Looks like the ContainerMetrics for each container will cause a unique name 
> to be registered with UniqueNames and the name will never be unregistered.
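
A fix would need an unregister path that also drops the cached name; 
hypothetically (the method and field names below are assumptions, not the 
actual metrics code):

{code}
// Hypothetical sketch: when a source unregisters, remove its entry from
// the UniqueNames map too, so per-container names don't accumulate.
public synchronized void unregisterSource(String name) {
  sources.remove(name);            // existing behaviour: drop the source
  uniqueNames.remove(name);        // the missing piece: drop the cached name
}
{code}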



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.

2017-01-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805318#comment-15805318
 ] 

Steve Loughran commented on HADOOP-13402:
-

HDFS's rename(src, dest, options) doesn't let you rename onto a directory unless 
that destination is an empty directory, in which case it deletes that empty 
directory and moves the source under the parent with that directory's name.
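
Concretely, the semantics described above look something like this illustrative 
sketch against the {{FileContext}} API (the paths are made up):

{code}
// Requires org.apache.hadoop.fs.{FileContext, Options, Path}.
// Assumes /archive/data exists and is an empty directory.
FileContext fc = FileContext.getFileContext(conf);
Path src  = new Path("/work/data");       // existing source directory
Path dest = new Path("/archive/data");    // pre-existing, empty directory
// HDFS deletes the empty destination and moves the source into its place,
// so the data ends up at /archive/data.
fc.rename(src, dest, Options.Rename.OVERWRITE);
// With a non-empty destination, the same call fails instead.
{code}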



> S3A should allow renaming to a pre-existing destination directory to move the 
> source path under that directory, similar to HDFS.
> 
>
> Key: HADOOP-13402
> URL: https://issues.apache.org/jira/browse/HADOOP-13402
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> In HDFS, a rename to a destination path that is a pre-existing directory is 
> interpreted as moving the source path relative to that pre-existing 
> directory.  In S3A, this operation currently fails (does nothing and returns 
> {{false}}), unless that destination directory is empty.  This issue proposes 
> to change S3A to allow this behavior, so that it more closely matches the 
> semantics of HDFS and other file systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13811) s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to sanitize XML document destined for handler class

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13811:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> s3a: getFileStatus fails with com.amazonaws.AmazonClientException: Failed to 
> sanitize XML document destined for handler class
> -
>
> Key: HADOOP-13811
> URL: https://issues.apache.org/jira/browse/HADOOP-13811
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Occasionally, getFileStatus() fails with a stack trace starting 
> with {{com.amazonaws.AmazonClientException: Failed to sanitize XML document 
> destined for handler class}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13402:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> S3A should allow renaming to a pre-existing destination directory to move the 
> source path under that directory, similar to HDFS.
> 
>
> Key: HADOOP-13402
> URL: https://issues.apache.org/jira/browse/HADOOP-13402
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> In HDFS, a rename to a destination path that is a pre-existing directory is 
> interpreted as moving the source path relative to that pre-existing 
> directory.  In S3A, this operation currently fails (does nothing and returns 
> {{false}}), unless that destination directory is empty.  This issue proposes 
> to change S3A to allow this behavior, so that it more closely matches the 
> semantics of HDFS and other file systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13636) s3a rm on the CLI generates deprecation warning on io.bytes.per.checksum

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13636:

Affects Version/s: 2.8.0
 Priority: Minor  (was: Major)

> s3a rm on the CLI generates deprecation warning on io.bytes.per.checksum
> 
>
> Key: HADOOP-13636
> URL: https://issues.apache.org/jira/browse/HADOOP-13636
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> An rm -r triggers a deprecation warning about {{io.bytes.per.checksum}}. This 
> is a property that is not explicitly used anywhere in S3A: something else is 
> asking for it, and that something should switch to the new value.
> {code}
> $ ./hadoop fs -rm -r s3a://XYZ/lib
> 16/09/21 16:24:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/21 16:24:05 INFO Configuration.deprecation: io.bytes.per.checksum is 
> deprecated. Instead, use dfs.bytes-per-checksum
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13636) s3a rm on the CLI generates deprecation warning on io.bytes.per.checksum

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13636:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> s3a rm on the CLI generates deprecation warning on io.bytes.per.checksum
> 
>
> Key: HADOOP-13636
> URL: https://issues.apache.org/jira/browse/HADOOP-13636
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> An rm -r triggers a deprecation warning about {{io.bytes.per.checksum}}. This 
> is a property that is not explicitly used anywhere in S3A: something else is 
> asking for it, and that something should switch to the new value.
> {code}
> $ ./hadoop fs -rm -r s3a://XYZ/lib
> 16/09/21 16:24:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/09/21 16:24:05 INFO Configuration.deprecation: io.bytes.per.checksum is 
> deprecated. Instead, use dfs.bytes-per-checksum
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13402:

Target Version/s:   (was: 2.8.0)

> S3A should allow renaming to a pre-existing destination directory to move the 
> source path under that directory, similar to HDFS.
> 
>
> Key: HADOOP-13402
> URL: https://issues.apache.org/jira/browse/HADOOP-13402
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> In HDFS, a rename to a destination path that is a pre-existing directory is 
> interpreted as moving the source path relative to that pre-existing 
> directory.  In S3A, this operation currently fails (does nothing and returns 
> {{false}}), unless that destination directory is empty.  This issue proposes 
> to change S3A to allow this behavior, so that it more closely matches the 
> semantics of HDFS and other file systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13648) s3a home directory to be "/"

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13648:

Parent Issue: HADOOP-13204  (was: HADOOP-11694)

> s3a home directory to be "/"
> 
>
> Key: HADOOP-13648
> URL: https://issues.apache.org/jira/browse/HADOOP-13648
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> The home directory of an s3a instance is {{"/user/" + 
> System.getProperty("user.name")}}. As HADOOP-12774 notes, it gets the user 
> wrong: were it to be correct, it should use the short name of the current 
> principal.
> I don't think the username is valid here at all. s3a buckets are not 
> filesystems with users and permissions; all this per-user home dir appears to 
> do is cause confusion, and to end up putting the output of a {{hadoop fs -rm}} 
> operation into a directory under it.
> If we made it "/", then it'd be the same for all users, and "/.Trash" would be 
> where deleted files get copied to.
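
For illustration, the change proposed above amounts to something like this 
sketch against the FileSystem API (not a patch):

{code}
// Current (described) behaviour: a per-user home directory.
@Override
public Path getHomeDirectory() {
  return makeQualified(
      new Path("/user/" + System.getProperty("user.name")));
}
// Proposed: one shared home directory for every caller, so trash goes
// to /.Trash regardless of user:
//   return makeQualified(new Path("/"));
{code}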



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Target Version/s: 2.9.0
  Status: Patch Available  (was: Open)

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch
>
>
> S3a now supports different regions by way of declaring the endpoint, but you 
> can't do things like read in one region and write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt or 
> s3a://b2.seoul, then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of the URL to set the region of a bucket from 
> the domain and have the AWS library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Attachment: HADOOP-13336-HADOOP-13345-002.patch

Patch 002. This one I like.

When a new FS instance with URI {{s3a://BUCKET.whatever}} is created, the 
supplied conf is cloned, with all {{fs.s3a.bucket.BUCKET}} properties copied 
onto the base {{fs.s3a}} ones (excluding {{fs.s3a.impl}} and any attempt to 
overwrite those via {{fs.s3a.bucket}} entries).

This lets you do things like declare different endpoints for different buckets:
{code}
<property>
  <name>fs.s3a.bucket.landsat-pds.endpoint</name>
  <value>s3.amazonaws.com</value>
  <description>The endpoint for s3a://landsat-pds URLs</description>
</property>
{code}

It will also handle auth mechanisms, fadvise policy, output tuning, and so on, 
and hence supports different buckets with different access accounts, remote 
locales, etc.

Test: yes, of base propagation. 
I've added an implicit one by removing the special code needed to let you 
specify a different endpoint for the test CSV file. Now you can change the 
default fs.s3a.endpoint to somewhere like Frankfurt, yet still use the landsat 
image, just by defining a bucket-specific endpoint for it. 

Tested against S3 Frankfurt, without the override (to verify the default 
endpoint is picked up), then again with the overridden endpoint.

Documentation: yes, with examples covering endpoints and authentication. I also 
cut the section on CSV endpoint configuration, as it's implicitly covered by the 
new material.

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch, 
> HADOOP-13336-HADOOP-13345-002.patch
>
>
> S3a now supports different regions by way of declaring the endpoint, but you 
> can't do things like read in one region and write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt or 
> s3a://b2.seoul, then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of the URL to set the region of a bucket from 
> the domain and have the AWS library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805303#comment-15805303
 ] 

Hadoop QA commented on HADOOP-13953:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 6 unchanged - 1 fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13953 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846048/HADOOP-13953.02.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ee6a5c3031a2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2977bc6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11381/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11381/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Make FTPFileSystem's data connection mode and transfer mode configurable
> 
>
> Key: HADOOP-13953
> URL: https://issues.apache.org/jira/browse/HADOOP-13953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13953.01.patch, HADOOP-13953.02.patch

[jira] [Assigned] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-13908:
--

Assignee: Chris Nauroth  (was: Mingliang Liu)

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Chris Nauroth
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> the table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set to true, 
> we still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for it to become active 
> before any table/item operations.
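
The initialization contract being described reduces to roughly the following 
(an illustrative sketch over the AWS SDK v1 document API, not the patch itself):

{code}
// Illustrative: regardless of fs.s3a.s3guard.ddb.table.create, bind to
// the existing table and block until ACTIVE before any item operations.
Table table = dynamoDB.getTable(tableName);   // lazy handle, no API call yet
table.waitForActive();   // blocks until ACTIVE; throws if the table is absent
{code}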



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13908:
---
Assignee: Mingliang Liu  (was: Chris Nauroth)

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> the table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set to true, 
> we still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for it to become active 
> before any table/item operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13931) S3AGuard: Use BatchWriteItem in DynamoDBMetadataStore#put()

2017-01-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13931:
---
   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

I have committed this patch to the HADOOP-13345 feature branch.

bq. I found they failed last time because I ran the tests both on my local 
machine and the AWS EC2 vm using the same S3 bucket at the same time, which is 
not supported I believe. Sorry for the last confusing comment.

No worries!  Yes, we try to do our best on test isolation within a single mvn 
run so that multiple test suites can run in parallel, but we don't currently 
attempt to isolate multiple concurrent mvn runs against the same bucket.  
For some test suites, like the root path tests and the multi-part purge tests, 
it would probably be impossible.

> S3AGuard: Use BatchWriteItem in DynamoDBMetadataStore#put()
> ---
>
> Key: HADOOP-13931
> URL: https://issues.apache.org/jira/browse/HADOOP-13931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Rajesh Balamohan
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-13931-HADOOP-13345.000.patch, 
> HADOOP-13931-HADOOP-13345.001.patch
>
>
> Using {{batchWriteItem}} might be more performant in 
> {{DynamoDBMetadataStore#put(DirListingMetadata meta)}} and 
> {{DynamoDBMetadataStore#put(PathMetadata meta)}}.
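
The idea reduces to grouping a directory listing's entries into one batched 
call, roughly as below (AWS SDK v1 document API; {{itemFrom()}} is a 
hypothetical mapping helper, not the actual code):

{code}
// One BatchWriteItem round trip (up to 25 items) instead of per-item puts.
TableWriteItems writes = new TableWriteItems(tableName);
for (PathMetadata meta : dirListing.getListing()) {
  writes.addItemToPut(itemFrom(meta));    // itemFrom(): hypothetical
}
dynamoDB.batchWriteItem(writes);
{code}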



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS should not check in contract-test-options.xml

2017-01-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15805224#comment-15805224
 ] 

Chris Nauroth commented on HADOOP-13929:


I know John is making more changes, but here is my reply to the most recent 
comments.

Patch 005 deleted contract-test-options.xml, but kept it listed in .gitignore.  
I don't think it needs to remain in .gitignore, because there is no other file 
anywhere in the source tree named contract-test-options.xml, besides the ADL 
one that the patch deletes.

In general, though, trying to achieve some commonality across these file names 
and using .gitignore entries that can cover all sub-modules globally sounds like 
a great move for future-proofing this.

bq. Another question: can we add another property fs.contract.test.enabled 
(default true to be backwards compatible)?

I'm not sure I completely understand, but does the lack of a filesystem name in 
this new property mean that it would be global, not tied to a specific file 
system's tests?  If it's global, then a weakness of this approach is that for developers 
running {{mvn verify}} at the root of the source tree or at the root of 
hadoop-cloud-storage-project, it would be all or nothing.  If a developer had 
credentials to AWS and Azure but not OpenStack, then they'd need to do 
something different to run just for the modules where they have credentials.
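
For what it's worth, a per-suite check of such a switch might look like the 
following in a contract test's setup; the property name and default are purely 
hypothetical:

{code}
// Hypothetical: skip (rather than fail) a suite when its tests are
// disabled by configuration. Property name and default are assumptions.
boolean enabled = getContract().getConf()
    .getBoolean("fs.contract.test.enabled", true);
Assume.assumeTrue("contract tests disabled by configuration", enabled);
{code}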

> ADLS should not check in contract-test-options.xml
> --
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13953:
---
Attachment: HADOOP-13953.02.patch

> Make FTPFileSystem's data connection mode and transfer mode configurable
> 
>
> Key: HADOOP-13953
> URL: https://issues.apache.org/jira/browse/HADOOP-13953
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.22.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13953.01.patch, HADOOP-13953.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804744#comment-15804744
 ] 

Hadoop QA commented on HADOOP-13650:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 8s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
0s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 44s{color} | {color:orange} root: The patch generated 39 new + 10 unchanged 
- 0 fixed = 49 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
16s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13650 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846013/HADOOP-13650-HADOOP-13345.004.patch
 |
| Optional Tests |  asflicense  compile  javac  java

[jira] [Updated] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-06 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13650:
---
Attachment: HADOOP-13650-HADOOP-13345.004.patch

Uploaded a patch to fix the findbugs and javadoc warnings.

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch, 
> HADOOP-13650-HADOOP-13345.001.patch, HADOOP-13650-HADOOP-13345.002.patch, 
> HADOOP-13650-HADOOP-13345.003.patch, HADOOP-13650-HADOOP-13345.004.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, e.g., create or delete the metadata store, or {{import}} and {{sync}} 
> file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13336) S3A to support per-bucket configuration

2017-01-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13336:

Status: Open  (was: Patch Available)

> S3A to support per-bucket configuration
> ---
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13336-HADOOP-13345-001.patch
>
>
> S3a now supports different regions by way of declaring the endpoint, but you 
> can't do things like read in one region and write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt or 
> s3a://b2.seoul, then this would be possible. 
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of the URL to set the region of a bucket from 
> the domain and have the AWS library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2017-01-06 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804365#comment-15804365
 ] 

Lei (Eddy) Xu commented on HADOOP-13650:


bq. DDB table region is always the s3 bucket region for simplicity.

Yes, I agree with this.

bq. The general usage pattern is to specify the fs.defaultFS as s3://mybucket 
alike:

I was thinking of use cases such as using Hadoop to run ETL jobs that take S3 
as the input and output locations, as AWS EMR does.  In such a case, the 
computing cluster (i.e., Hadoop / Hive / Spark) should set 
{{fs.defaultFS}} to the NameNode, because ETL pipelines use this HDFS cluster 
instead of S3 to store intermediate data. 

In the current {{DynamoDBMetadataStore#initialize(Configuration)}}, such a case 
will raise an {{Exception}} if we do not explicitly specify an s3a URI on the 
CLI, because it cannot create an {{S3AFileSystem}}. 

bq. we rely on the endpoint for determining the DDB region.

I am fine with that. 
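
To make the concern concrete, here is a sketch of the failure mode (the class 
names are real, the scenario illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://namenode:8020");   // typical ETL cluster
// FileSystem.get(conf) returns the HDFS filesystem here, so a CLI that
// casts the default FS to S3AFileSystem would fail. Passing the s3a URI
// explicitly avoids that:
FileSystem s3 = FileSystem.get(URI.create("s3a://mybucket/"), conf);
{code}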




> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch, 
> HADOOP-13650-HADOOP-13345.001.patch, HADOOP-13650-HADOOP-13345.002.patch, 
> HADOOP-13650-HADOOP-13345.003.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, e.g., create or delete the metadata store, or {{import}} and {{sync}} 
> file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13774) Rest Loaded App fails

2017-01-06 Thread Omar Bouras (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804254#comment-15804254
 ] 

Omar Bouras commented on HADOOP-13774:
--

Sorry for the missing information. Affects version updated.

> Rest Loaded App fails
> -
>
> Key: HADOOP-13774
> URL: https://issues.apache.org/jira/browse/HADOOP-13774
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
> Environment: Hadoop Map Reduce REST
>Reporter: Omar Bouras
>Priority: Minor
> Attachments: Error_message.png, Failed_app_exec.png, WordCount.java, 
> app.json
>
>
> Hello,
> I am launching an app via the MR REST API. This app executes well under a 
> normal Hadoop invocation. However, when I use the REST call 
> {code}
> curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: 
> application/json' http://localhost:8088/ws/v1/cluster/apps?user.name=exo -d 
> @app.json
> {code} 
> the job output is:
> {code}
> 16/10/31 21:42:13 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 16/10/31 21:42:14 INFO input.FileInputFormat: Total input paths to process : 4
> 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: number of splits:4
> 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1477946173138_0003
> 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Kind: YARN_AM_RM_TOKEN, 
> Service: , Ident: (appAttemptId { application_id { id: 3 cluster_timestamp: 
> 1477946173138 } attemptId: 1 } keyId: -678745738)
> 16/10/31 21:42:15 INFO impl.YarnClientImpl: Submitted application 
> application_1477946173138_0003
> 16/10/31 21:42:15 INFO mapreduce.Job: The url to track the job: 
> http://MEA-029-L:8088/proxy/application_1477946173138_0003/
> 16/10/31 21:42:15 INFO mapreduce.Job: Running job: job_1477946173138_0003
> 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 running in 
> uber mode : false
> 16/10/31 21:52:53 INFO mapreduce.Job:  map 0% reduce 0%
> 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 failed with 
> state FAILED due to: Application application_1477946173138_0003 failed 1 
> times due to ApplicationMaster for attempt 
> appattempt_1477946173138_0003_01 timed out. Failing the application.
> 16/10/31 21:52:53 INFO mapreduce.Job: Counters: 0
> {code}
> The main job always ends by failing with a timeout. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13774) Rest Loaded App fails

2017-01-06 Thread Omar Bouras (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omar Bouras updated HADOOP-13774:
-
Affects Version/s: 2.7.3

> Rest Loaded App fails
> -
>
> Key: HADOOP-13774
> URL: https://issues.apache.org/jira/browse/HADOOP-13774
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.3
> Environment: Hadoop Map Reduce REST
>Reporter: Omar Bouras
>Priority: Minor
> Attachments: Error_message.png, Failed_app_exec.png, WordCount.java, 
> app.json
>
>
> Hello,
> I am launching an app via the MR REST API. This app executes well under a 
> normal Hadoop invocation. However, when I use the REST call 
> {code}
> curl -i -X POST -H 'Accept: application/json' -H 'Content-Type: 
> application/json' http://localhost:8088/ws/v1/cluster/apps?user.name=exo -d 
> @app.json
> {code} 
> the job output is:
> {code}
> 16/10/31 21:42:13 INFO client.RMProxy: Connecting to ResourceManager at 
> /0.0.0.0:8032
> 16/10/31 21:42:14 INFO input.FileInputFormat: Total input paths to process : 4
> 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: number of splits:4
> 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_1477946173138_0003
> 16/10/31 21:42:14 INFO mapreduce.JobSubmitter: Kind: YARN_AM_RM_TOKEN, 
> Service: , Ident: (appAttemptId { application_id { id: 3 cluster_timestamp: 
> 1477946173138 } attemptId: 1 } keyId: -678745738)
> 16/10/31 21:42:15 INFO impl.YarnClientImpl: Submitted application 
> application_1477946173138_0003
> 16/10/31 21:42:15 INFO mapreduce.Job: The url to track the job: 
> http://MEA-029-L:8088/proxy/application_1477946173138_0003/
> 16/10/31 21:42:15 INFO mapreduce.Job: Running job: job_1477946173138_0003
> 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 running in 
> uber mode : false
> 16/10/31 21:52:53 INFO mapreduce.Job:  map 0% reduce 0%
> 16/10/31 21:52:53 INFO mapreduce.Job: Job job_1477946173138_0003 failed with 
> state FAILED due to: Application application_1477946173138_0003 failed 1 
> times due to ApplicationMaster for attempt 
> appattempt_1477946173138_0003_01 timed out. Failing the application.
> 16/10/31 21:52:53 INFO mapreduce.Job: Counters: 0
> {code}
> The main job always ends by failing with a timeout. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804093#comment-15804093
 ] 

Mingliang Liu edited comment on HADOOP-13908 at 1/6/17 9:25 AM:


When I tested the v4 patch with the table not existing (and auto-creation 
enabled), I still got a "requested resource not found" exception, which is not 
expected. There has been some offline discussion with Chris and Steve, and I'll 
post the progress or a new patch later.

{code}
$ mvn -Dtest=none -Dit.test='ITestS3A*' -q -Dscale clean verify 


---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 7.569 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate)
  Time elapsed: 7.008 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: get on s3a://mliu-s3guard/test: 
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested 
resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: ooxx): Requested resource not found 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: ooxx)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:171)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:94)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:309)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1607)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:1551)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1520)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2218)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
...
{code}


was (Author: liuml07):
When I tested the v4 patch with the table not existing (and auto-creation 
enabled), I still got a "requested resource not found" exception, which is not 
expected. There is some offline discussion ongoing, and I'll post the progress 
or a new patch later.

{code}
$ mvn -Dtest=none -Dit.test='ITestS3A*' -q -Dscale clean verify 


---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 7.569 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate)
  Time elapsed: 7.008 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: get on s3a://mliu-s3guard/test: 
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested 
resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: ooxx): Requested resource not found 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: ooxx)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:171)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:94)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:309)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1607)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:1551)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1520)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2218)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
...
{code}

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu

[jira] [Commented] (HADOOP-13908) S3Guard: Existing tables may not be initialized correctly in DynamoDBMetadataStore

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804093#comment-15804093
 ] 

Mingliang Liu commented on HADOOP-13908:


When I tested the v4 patch with the table not existing (and auto-creation 
enabled), I still got a "requested resource not found" exception, which is not 
expected. There is some offline discussion ongoing, and I'll post the progress 
or a new patch later.

{code}
$ mvn -Dtest=none -Dit.test='ITestS3A*' -q -Dscale clean verify 


---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 7.569 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate
testOverwriteNonEmptyDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractCreate)
  Time elapsed: 7.008 sec  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSServiceIOException: get on s3a://mliu-s3guard/test: 
com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested 
resource not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: ooxx): Requested resource not found 
(Service: AmazonDynamoDBv2; Status Code: 400; Error Code: 
ResourceNotFoundException; Request ID: ooxx)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:171)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:94)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.get(DynamoDBMetadataStore.java:309)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1607)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:1551)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:1520)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2218)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
...
{code}

> S3Guard: Existing tables may not be initialized correctly in 
> DynamoDBMetadataStore
> --
>
> Key: HADOOP-13908
> URL: https://issues.apache.org/jira/browse/HADOOP-13908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13908-HADOOP-13345.000.patch, 
> HADOOP-13908-HADOOP-13345.001.patch, HADOOP-13908-HADOOP-13345.002.patch, 
> HADOOP-13908-HADOOP-13345.002.patch, HADOOP-13908-HADOOP-13345.003.patch, 
> HADOOP-13908-HADOOP-13345.004.patch
>
>
> This was based on discussion in [HADOOP-13455]. Though we should not create 
> the table unless the config {{fs.s3a.s3guard.ddb.table.create}} is set to true, 
> we still have to get the existing table in 
> {{DynamoDBMetadataStore#initialize()}} and wait for it to become active 
> before any table/item operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13931) S3AGuard: Use BatchWriteItem in DynamoDBMetadataStore#put()

2017-01-06 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803082#comment-15803082
 ] 

Mingliang Liu edited comment on HADOOP-13931 at 1/6/17 8:58 AM:


[~cnauroth],

I cannot reproduce the test failures above. I found they failed last time 
because I ran the tests both on my local machine and on an AWS EC2 VM using the 
same S3 bucket at the same time, which I believe is not supported. Sorry for 
the earlier confusing comment.

I re-ran the tests multiple times (both on EC2 and my local machine) and got 
the following report, showing the patch is good, as the failing tests are 
known and unrelated.
{code}
$ mvn -Dtest=none -Dit.test='ITestS3A*' -q -Dscale clean verify

Results :

Tests in error:
  ITestS3AAWSCredentialsProvider.testAnonymousProvider:133 » IO Failed to 
instan...
  
ITestS3AFileSystemContract>FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed:669->FileSystemContractBaseTest.rename:525
 » AWSServiceIO
  ITestS3ACredentialsInURL.testInvalidCredentialsFail:127 » AccessDenied 
s3a://m...

Tests run: 321, Failures: 0, Errors: 3, Skipped: 10
{code}


was (Author: liuml07):
[~cnauroth],

I cannot reproduce the test failures above. I found they failed last time 
because I ran the tests both on my local machine and on an AWS EC2 VM using the 
same S3 bucket, which is not supported. Sorry for the earlier confusing comment.

I ran the tests multiple times (both on EC2 and my local machine) and got the 
following report, showing the patch is good, as the failing tests are 
known and unrelated.
{code}
$ mvn -Dtest=none -Dit.test='ITestS3A*' -q -Dscale clean verify

Results :

Tests in error:
  ITestS3AAWSCredentialsProvider.testAnonymousProvider:133 » IO Failed to 
instan...
  
ITestS3AFileSystemContract>FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed:669->FileSystemContractBaseTest.rename:525
 » AWSServiceIO
  ITestS3ACredentialsInURL.testInvalidCredentialsFail:127 » AccessDenied 
s3a://m...

Tests run: 321, Failures: 0, Errors: 3, Skipped: 10
{code}

> S3AGuard: Use BatchWriteItem in DynamoDBMetadataStore#put()
> ---
>
> Key: HADOOP-13931
> URL: https://issues.apache.org/jira/browse/HADOOP-13931
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Rajesh Balamohan
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-13931-HADOOP-13345.000.patch, 
> HADOOP-13931-HADOOP-13345.001.patch
>
>
> Using {{batchWriteItem}} might be more performant in 
> {{DynamoDBMetadataStore#put(DirListingMetadata meta)}} and 
> {{DynamoDBMetadataStore#put(PathMetadata meta)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13953) Make FTPFileSystem's data connection mode and transfer mode configurable

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804037#comment-15804037
 ] 

Hadoop QA commented on HADOOP-13953:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 6 unchanged - 1 fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-common-project/hadoop-common generated 6 new + 
0 unchanged - 0 fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  2s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getTransferMode(Configuration)   At 
FTPFileSystem.java:== or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getTransferMode(Configuration)   At 
FTPFileSystem.java:[line 186] |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getTransferMode(Configuration)   At 
FTPFileSystem.java:== or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getTransferMode(Configuration)   At 
FTPFileSystem.java:[line 183] |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getTransferMode(Configuration)   At 
FTPFileSystem.java:== or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.getTransferMode(Configuration)   At 
FTPFileSystem.java:[line 181] |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.setDataConnectionMode(FTPClient, 
Configuration)   At FTPFileSystem.java:== or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.setDataConnectionMode(FTPClient, 
Configuration)   At FTPFileSystem.java:[line 218] |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.fs.ftp.FTPFileSystem.setDataConnectionMode(FTPClient, 
Configuration)   At FTPFileSystem.java:== or != in 
org.apache.hadoop.fs.ftp.FTPF
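For context, the pattern FindBugs is flagging above is reference comparison of strings; a minimal illustrative sketch, where the config key and mode values are assumptions rather than the exact ones from the patch:

{code}
import org.apache.hadoop.conf.Configuration;

static boolean isStreamMode(Configuration conf) {
  String mode = conf.get("fs.ftp.transfer.mode", "BLOCK_TRANSFER_MODE");
  // Flagged by FindBugs: == compares references, so two equal strings
  // created at different times may still compare false.
  //   if (mode == "STREAM_TRANSFER_MODE") { ... }
  // Fix: compare by value instead.
  return "STREAM_TRANSFER_MODE".equalsIgnoreCase(mode);
}
{code}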

[jira] [Updated] (HADOOP-13315) FileContext#umask is not initialized properly

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13315:

Fix Version/s: (was: 2.9.0)

> FileContext#umask is not initialized properly
> -
>
> Key: HADOOP-13315
> URL: https://issues.apache.org/jira/browse/HADOOP-13315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13315.001.patch
>
>
> Notice that field {{umask}} is not set from parameter {{theUmask}}, so 
> {{theUmask}} is unused.
> {code:title=FileContext.java}
>   private FileContext(final AbstractFileSystem defFs,
> final FsPermission theUmask, final Configuration aConf) {
> defaultFS = defFs;
> umask = FsPermission.getUMask(aConf);
> conf = aConf;
> ...
>   public static FileContext getFileContext(final AbstractFileSystem defFS,
> final Configuration aConf) {
> return new FileContext(defFS, FsPermission.getUMask(aConf), aConf);
>   }
> {code}
> Proposal:
> * Set {{umask}} to {{theUmask}}, as sketched below. Since the only caller 
> {{getFileContext}} already passes the same value in {{theUmask}}, there is 
> no change in behavior.
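A sketch of the proposed one-line change, keeping the elision from the quoted constructor:

{code}
  private FileContext(final AbstractFileSystem defFs,
    final FsPermission theUmask, final Configuration aConf) {
    defaultFS = defFs;
    umask = theUmask;   // was: FsPermission.getUMask(aConf)
    conf = aConf;
    ...
{code}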



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13395) Enhance TestKMSAudit

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13395:

Fix Version/s: (was: 2.9.0)

> Enhance TestKMSAudit
> 
>
> Key: HADOOP-13395
> URL: https://issues.apache.org/jira/browse/HADOOP-13395
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13395.01.patch, HADOOP-13395.02.patch, 
> HADOOP-13395.03.patch
>
>
> This jira serves two goals:
> - Enhance the existing test cases in TestKMSAudit to rule out flakiness.
> - Add a new test case covering the formatting of different events.
> This will help us ensure audit-log compatibility when we add a new log 
> format to KMS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12597) In kms-site.xml configuration "hadoop.security.keystore.JavaKeyStoreProvider.password" should be updated with new name

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12597:

Fix Version/s: (was: 2.9.0)

> In kms-site.xml configuration 
> "hadoop.security.keystore.JavaKeyStoreProvider.password" should be updated 
> with new name
> --
>
> Key: HADOOP-12597
> URL: https://issues.apache.org/jira/browse/HADOOP-12597
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.0
>Reporter: huangyitian
>Assignee: Surendra Singh Lilhore
>Priority: Minor
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-12597-branch-2.7.patch, HDFS-8534.patch
>
>
> In http://hadoop.apache.org/docs/r2.7.0/hadoop-kms/index.html, the property 
> is documented as:
> {code}
> <property>
>   <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
>   <value>kms.keystore.password</value>
> </property>
> {code}
> But in kms-site.xml the configuration name is wrong.
> {code}
> <property>
>   <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
>   <value>none</value>
>   <description>
>     If using the JavaKeyStoreProvider, the password for the keystore file.
>   </description>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12605) Fix intermittent failure of TestIPC.testIpcWithReaderQueuing

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12605:

Fix Version/s: (was: 2.9.0)

> Fix intermittent failure of TestIPC.testIpcWithReaderQueuing
> 
>
> Key: HADOOP-12605
> URL: https://issues.apache.org/jira/browse/HADOOP-12605
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12605.001.patch, HADOOP-12605.002.patch, 
> HADOOP-12605.003.patch, HADOOP-12605.004.patch, HADOOP-12605.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12712) Fix some cmake plugin and native build warnings

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12712:

Fix Version/s: (was: 2.9.0)

> Fix some cmake plugin and native build warnings
> ---
>
> Key: HADOOP-12712
> URL: https://issues.apache.org/jira/browse/HADOOP-12712
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.4.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12712.001.patch, HADOOP-12712.002.patch, 
> HADOOP-12712.003.patch
>
>
> Fix some native build warnings



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12761) incremental maven build is not really incremental

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12761:

Fix Version/s: (was: 2.9.0)

> incremental maven build is not really incremental
> -
>
> Key: HADOOP-12761
> URL: https://issues.apache.org/jira/browse/HADOOP-12761
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
> Attachments: HADOOP-12761.01.patch
>
>
> For any version that uses v3.1 of the maven-compiler-plugin, the incremental 
> maven build is basically broken. For most modules, an incremental build 
> ({{mvn install -DskipTests}} on an already-built directory, for example) 
> rebuilds the whole module:
> {noformat}
> [INFO] 
> 
> [INFO] Building Apache Hadoop Common 3.0.0-SNAPSHOT
> [INFO] 
> 
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [INFO] Executed tasks
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ 
> hadoop-common ---
> [INFO] No changes detected in protoc files, skipping generation.
> [INFO] 
> [INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:version-info (version-info) @ 
> hadoop-common ---
> [WARNING] [svn, info] failed with error code 1
> [INFO] SCM: GIT
> [INFO] Computed MD5: c8e92ce138fcd723204649e4d7c6ddd
> [INFO] 
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
> hadoop-common ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] Copying 7 resources
> [INFO] Copying 1 resource
> [INFO] 
> [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
> hadoop-common ---
> [INFO] Changes detected - recompiling the module!
> [INFO] Compiling 871 source files to 
> /Users/foo/hadoop/hadoop-common-project/hadoop-common/target/classes
> {noformat}
> It turns out that the incremental-build feature of the maven-compiler-plugin 
> is basically broken in v3.1 (see 
> http://stackoverflow.com/questions/17944108/maven-compiler-plugin-always-detecting-a-set-of-sources-as-stale
>  and MCOMPILER-209). Ironically, this can be fixed by turning off the 
> "incremental build" configuration of the plugin, as sketched below.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13707:

Fix Version/s: (was: 2.9.0)

> If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot 
> be accessed
> -
>
> Key: HADOOP-13707
> URL: https://issues.apache.org/jira/browse/HADOOP-13707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13707-branch-2-addendum.patch, 
> HADOOP-13707-branch-2.8.patch, HADOOP-13707-branch-2.patch, 
> HADOOP-13707.001.patch, HADOOP-13707.002.patch, HADOOP-13707.003.patch, 
> HADOOP-13707.004.patch
>
>
> {{HttpServer2#hasAdministratorAccess}} uses {{hadoop.security.authorization}} 
> to detect whether HTTP is authenticated. That is not correct, because 
> enabling Kerberos and enabling HTTP SPNEGO are two separate steps. If 
> Kerberos is enabled while HTTP SPNEGO is not, some links such as "/logs" 
> cannot be accessed, and the server returns an error message as below:
> {quote}
> HTTP ERROR 403
> Problem accessing /logs/. Reason:
> User dr.who is unauthorized to access this page.
> {quote}
> We should make sure {{HttpServletRequest#getAuthType}} is not null before we 
> invoke {{HttpServer2#hasAdministratorAccess}}; {{getAuthType}} returns the 
> authentication scheme of the request.
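For reference, a minimal sketch of the proposed guard; {{getServletContext()}} and the surrounding servlet context are assumptions, not the actual patch:

{code}
// Only enforce the admin check when the request was actually authenticated;
// with Kerberos on but SPNEGO unconfigured, getAuthType() returns null.
if (request.getAuthType() != null
    && !HttpServer2.hasAdministratorAccess(getServletContext(),
        request, response)) {
  return;  // hasAdministratorAccess has already written the error response
}
{code}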



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12773) HBase classes fail to load with client/job classloader enabled

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12773:

Fix Version/s: (was: 2.9.0)

> HBase classes fail to load with client/job classloader enabled
> --
>
> Key: HADOOP-12773
> URL: https://issues.apache.org/jira/browse/HADOOP-12773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-12773.01.patch
>
>
> Currently if a user uses HBase and enables the client/job classloader, the 
> job fails to load HBase classes. For example,
> {noformat}
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/hbase/client/HBaseAdmin;
>   at java.lang.Class.getDeclaredFields0(Native Method)
>   at java.lang.Class.privateGetDeclaredFields(Class.java:2509)
>   at java.lang.Class.getDeclaredField(Class.java:1959)
>   at 
> java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1703)
>   at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:484)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.ObjectStreamClass.(ObjectStreamClass.java:472)
>   at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)
> {noformat}
> This is because the HBase classes (org.apache.hadoop.hbase.\*) match the 
> system-classes criteria and are therefore supposed to be loaded strictly 
> from the base classloader, but hadoop does not provide HBase as a dependency.
> We should exclude the HBase classes from the system classes until/unless 
> HBase is provided by a future version of hadoop.
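For reference, a sketch of the kind of exclusion involved: in the job classloader's system-classes list, a leading "-" negates a prefix. The snippet below shows it as a job configuration override with an abbreviated value, illustrative only; the patch itself adjusts the built-in default list:

{code}
<property>
  <name>mapreduce.job.classloader.system.classes</name>
  <!-- "-org.apache.hadoop.hbase." excludes HBase from the system classes,
       so the job classloader can load it from the job's own jars.
       The remaining entries are abbreviated here. -->
  <value>-org.apache.hadoop.hbase.,java.,javax.,org.apache.hadoop.</value>
</property>
{code}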



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13290) Appropriate use of generics in FairCallQueue

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13290:

Fix Version/s: (was: 2.9.0)

> Appropriate use of generics in FairCallQueue
> 
>
> Key: HADOOP-13290
> URL: https://issues.apache.org/jira/browse/HADOOP-13290
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Jonathan Hung
>  Labels: newbie++
> Fix For: 2.8.0, 2.6.5, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-13290.001.patch, HADOOP-13290.002.patch
>
>
> # {{BlockingQueue}} is used inconsistently, sometimes with and sometimes 
> without generic parameters, in the {{FairCallQueue}} class. It should be 
> parameterized.
> # The same goes for {{FairCallQueue}} itself. It should be parameterized, 
> though that could be a bit trickier.
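For illustration, the raw vs. parameterized use in question; a minimal sketch whose names are assumptions, not the actual FairCallQueue internals:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

class Example<E> {
  // Raw type (flagged): compiles, but drops compile-time type checking.
  //   private final List<BlockingQueue> queues = new ArrayList<>();

  // Parameterized, as the JIRA proposes:
  private final List<BlockingQueue<E>> queues = new ArrayList<>();

  void putAt(int priority, E e) throws InterruptedException {
    queues.get(priority).put(e);  // only E can be enqueued now
  }
}
{code}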



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2017-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11361:

Fix Version/s: (was: 2.9.0)

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: supportability
> Fix For: 2.8.0, 2.6.5, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361-006.patch, HADOOP-11361-007.patch, HADOOP-11361-009.patch, 
> HADOOP-11361.008.patch, HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org