[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624576#comment-15624576
 ] 

Yuanbo Liu commented on HADOOP-13773:
-

[~ferhui] Thanks for filing this jira.
{quote}
suggest uploading a patch file instead of github pull requests
{quote}
Agree with [~raviprak], please upload your patch.

A small suggestion about your code change:
{code}
if [ "$HADOOP_HEAPSIZE" == "" ];
{code}
please use "=" instead of "==" here; "==" is not defined by POSIX sh.
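A minimal sketch of the guard with the portable test, assuming the patch's 
intent is to inject the -Xmx512m default only when no heap size was requested:
{code}
# POSIX sh: use '=' (or -z) in string tests; '==' is a bash extension
if [ -z "$HADOOP_HEAPSIZE" ]; then
  export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
fi
{code}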

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>
> in conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM heap argument 
> does not take effect. In bin/hadoop I see
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take 
> effect. For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the 
> java process is 'java -Xmx1024m ... -Xmx512m ...', so -Xmx512m takes effect 
> and -Xmx1024m is ignored.
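On HotSpot, when -Xmx is given more than once the last occurrence wins; a 
quick way to confirm the reported behaviour (this should print a ~512 MB 
MaxHeapSize):
{code}
java -Xmx1024m -Xmx512m -XX:+PrintFlagsFinal -version | grep MaxHeapSize
{code}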






[jira] [Created] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Genmao Yu (JIRA)
Genmao Yu created HADOOP-13778:
--

 Summary: Unit test failed in TestAliyunOSSContractCreate
 Key: HADOOP-13778
 URL: https://issues.apache.org/jira/browse/HADOOP-13778
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/oss
Affects Versions: 3.0.0-alpha2
Reporter: Genmao Yu
Assignee: Genmao Yu
Priority: Minor


{{fs.contract.is-blobstore}} was split into more descriptive flags in 
HADOOP-13502; {{}fs.contract.create-visibility-delayed} also needs to be 
added to {{aliyun-oss.xml}}.






[jira] [Updated] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13778:
---
Description:  {{fs.contract.is-blobstore}} was split into more descriptive 
flags in HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs 
to be added to {{aliyun-oss.xml}}.  (was:  {{fs.contract.is-blobstore}} was 
split into more descriptive flags in HADOOP-13502; 
{{}fs.contract.create-visibility-delayed} also needs to be added to 
{{aliyun-oss.xml}})

> Unit test failed in TestAliyunOSSContractCreate
> ---
>
> Key: HADOOP-13778
> URL: https://issues.apache.org/jira/browse/HADOOP-13778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
>
>  {{fs.contract.is-blobstore}} was split into more descriptive flags in 
> HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs to be 
> added to {{aliyun-oss.xml}}.






[jira] [Work started] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13778 started by Genmao Yu.
--
> Unit test failed in TestAliyunOSSContractCreate
> ---
>
> Key: HADOOP-13778
> URL: https://issues.apache.org/jira/browse/HADOOP-13778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
>
>  {{fs.contract.is-blobstore}} was split into more descriptive flags in 
> HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs to be 
> added to {{aliyun-oss.xml}}.






[jira] [Updated] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13778:
---
Attachment: HADOOP-13778.001.patch

updates:

1. Add the {{fs.contract.create-visibility-delayed}} configuration for the 
unit test.
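For context, a sketch of the kind of entry this adds to {{aliyun-oss.xml}} 
(contract options are plain key/value properties; the exact value is whatever 
the attached patch sets):
{code}
<property>
  <name>fs.contract.create-visibility-delayed</name>
  <!-- value assumed for illustration; see HADOOP-13778.001.patch -->
  <value>true</value>
</property>
{code}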

> Unit test failed in TestAliyunOSSContractCreate
> ---
>
> Key: HADOOP-13778
> URL: https://issues.apache.org/jira/browse/HADOOP-13778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Attachments: HADOOP-13778.001.patch
>
>
>  {{fs.contract.is-blobstore}} was split into more descriptive flags in 
> HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs to be 
> added to {{aliyun-oss.xml}}.






[jira] [Updated] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13778:
---
Status: Patch Available  (was: In Progress)

> Unit test failed in TestAliyunOSSContractCreate
> ---
>
> Key: HADOOP-13778
> URL: https://issues.apache.org/jira/browse/HADOOP-13778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Attachments: HADOOP-13778.001.patch
>
>
>  {{fs.contract.is-blobstore}} was split into more descriptive flags in 
> HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs to be 
> added to {{aliyun-oss.xml}}.






[jira] [Commented] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624944#comment-15624944
 ] 

Genmao Yu commented on HADOOP-13778:


Result of the unit tests:

{code}
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun ---
[INFO] Deleting 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/src/main/resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun 
---
[INFO] Compiling 8 source files to 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-aliyun ---
[INFO] Compiling 16 source files to 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun ---
[INFO] Surefire report directory: 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 5.121 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.246 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.485 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.581 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.053 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.701 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.834 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.606 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.062 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.459 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.

[jira] [Commented] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15624976#comment-15624976
 ] 

Hadoop QA commented on HADOOP-13778:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13778 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836318/HADOOP-13778.001.patch
 |
| Optional Tests |  asflicense  unit  xml  |
| uname | Linux 9c5749d2ab44 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 310aa46 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10943/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10943/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unit test failed in TestAliyunOSSContractCreate
> ---
>
> Key: HADOOP-13778
> URL: https://issues.apache.org/jira/browse/HADOOP-13778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Attachments: HADOOP-13778.001.patch
>
>
>  {{fs.contract.is-blobstore}} was split into more descriptive flags in 
> HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs to be 
> added to {{aliyun-oss.xml}}.






[jira] [Updated] (HADOOP-13774) Rest Loaded App fails

2016-11-01 Thread Omar Bouras (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omar Bouras updated HADOOP-13774:
-
Attachment: Error_message.png
Failed_app_exec.png

Several times the MapReduce job ended with success; however, the application 
that launched it failed [^Failed_app_exec.png]:
{code}
16/11/01 10:33:16 INFO client.RMProxy: Connecting to ResourceManager at 
/0.0.0.0:8032
16/11/01 10:33:18 INFO input.FileInputFormat: Total input paths to process : 4
16/11/01 10:33:18 INFO mapreduce.JobSubmitter: number of splits:4
16/11/01 10:33:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1477992554679_0002
16/11/01 10:33:18 INFO mapreduce.JobSubmitter: Kind: YARN_AM_RM_TOKEN, Service: 
, Ident: (appAttemptId { application_id { id: 1 cluster_timestamp: 
1477992554679 } attemptId: 1 } keyId: 1178737823)
16/11/01 10:33:19 INFO impl.YarnClientImpl: Submitted application 
application_1477992554679_0002
16/11/01 10:33:19 INFO mapreduce.Job: The url to track the job: 
http://MEA-029-L:8088/proxy/application_1477992554679_0002/
16/11/01 10:33:19 INFO mapreduce.Job: Running job: job_1477992554679_0002
16/11/01 10:46:00 INFO mapreduce.Job: Job job_1477992554679_0002 running in 
uber mode : false
16/11/01 10:46:00 INFO mapreduce.Job:  map 0% reduce 0%
16/11/01 10:46:10 INFO mapreduce.Job:  map 75% reduce 0%
16/11/01 10:46:11 INFO mapreduce.Job:  map 100% reduce 0%
16/11/01 10:46:15 INFO mapreduce.Job:  map 100% reduce 100%
16/11/01 10:46:16 INFO mapreduce.Job: Job job_1477992554679_0002 completed 
successfully
16/11/01 10:46:16 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=47766
FILE: Number of bytes written=688270
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=105882
HDFS: Number of bytes written=33400
HDFS: Number of read operations=15
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters 
Launched map tasks=4
Launched reduce tasks=1
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=30786
Total time spent by all reduces in occupied slots (ms)=2869
Total time spent by all map tasks (ms)=30786
Total time spent by all reduce tasks (ms)=2869
Total vcore-milliseconds taken by all map tasks=30786
Total vcore-milliseconds taken by all reduce tasks=2869
Total megabyte-milliseconds taken by all map tasks=31524864
Total megabyte-milliseconds taken by all reduce tasks=2937856
Map-Reduce Framework
Map input records=2161
Map output records=14799
Map output bytes=161969
Map output materialized bytes=47784
Input split bytes=413
Combine input records=14799
Combine output records=2988
Reduce input groups=2667
Reduce shuffle bytes=47784
Reduce input records=2988
Reduce output records=2667
Spilled Records=5976
Shuffled Maps =4
Failed Shuffles=0
Merged Map outputs=4
GC time elapsed (ms)=1199
CPU time spent (ms)=5150
Physical memory (bytes) snapshot=1253482496
Virtual memory (bytes) snapshot=9562353664
Total committed heap usage (bytes)=925892608
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters 
Bytes Read=105469
File Output Format Counters 
Bytes Written=33400
{code}

It always ends with the same error message [^Error_message.png], even after 
increasing *mapreduce.task.timeout*:
{code}
<property>
  <name>mapreduce.task.timeout</name>
  <value>180</value>
  <description>The number of milliseconds before a task will be
  terminated if it neither reads an input, writes an output, nor
  updates its status string.  A value of 0 disables the timeout.
  </description>
</property>
{code}

> Rest Loaded App fails
> -
>
> Key: HADOOP-13774
> URL: https://issues.apache.org/jira/browse/HADOOP-13774
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Hadoop Map Reduce REST
>Reporter: Omar Bouras
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: Error_message.png, Failed_app_exec.png, WordCount.java, 
> app.jso

[jira] [Commented] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625025#comment-15625025
 ] 

ASF GitHub Bot commented on HADOOP-13680:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/145


> fs.s3a.readahead.range to use getLongBytes
> --
>
> Key: HADOOP-13680
> URL: https://issues.apache.org/jira/browse/HADOOP-13680
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
> Fix For: 2.8.0
>
> Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch
>
>
> The {{fs.s3a.readahead.range}} value is measured in bytes, but can be 
> hundreds of KB. It would be easier to use getLongBytes, so it can be set to 
> values like "300k".
> This will be backwards compatible with the existing settings if anyone is 
> using them, because a value with no prefix will still default to bytes.
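A sketch of what this enables ({{getLongBytes}} is an existing 
{{Configuration}} method; the prefixes are binary, so "300k" is 300 * 1024):
{code}
// import org.apache.hadoop.conf.Configuration;
Configuration conf = new Configuration();
conf.set("fs.s3a.readahead.range", "300k");
// plain numbers still parse as bytes, keeping old settings working
long readahead = conf.getLongBytes("fs.s3a.readahead.range", 65536);
// readahead == 307200
{code}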






[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625029#comment-15625029
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/136


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0, 3.0.0-alpha2
>
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> home directory. This is wrong, as it doesn't work in a YARN app where the 
> identity is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
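A sketch of the difference under a doAs (the method on current UGI is 
{{getShortUserName()}}; "alice" is a made-up user):
{code}
// import java.security.PrivilegedExceptionAction;
// import org.apache.hadoop.security.UserGroupInformation;
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("alice");
ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
  // tracks the doAs / HADOOP_USER_NAME identity ...
  String user = UserGroupInformation.getCurrentUser().getShortUserName();
  // ... whereas System.getProperty("user.name") stays the OS login
  return null;
});
{code}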






[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor an AbstractService

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625035#comment-15625035
 ] 

ASF GitHub Bot commented on HADOOP-12321:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/54


> Make JvmPauseMonitor an AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle 
> which has already proven brittle to ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start & stop operations into the {{serviceInit()}}, {{serviceStart()}} 
> & {{serviceStop()}} methods will fix the concurrency and state-model issues, 
> and make it trivial to add as a child of any YARN service which subclasses 
> {{CompositeService}} (most of the NM and RM services): they will be able to 
> hook up the monitor simply by creating one in the constructor and adding it 
> as a child.
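A sketch of the wiring this enables (assuming JvmPauseMonitor becomes an 
{{AbstractService}}; {{MyDaemon}} is a made-up example):
{code}
// import org.apache.hadoop.service.CompositeService;
// import org.apache.hadoop.util.JvmPauseMonitor;
class MyDaemon extends CompositeService {
  MyDaemon() {
    super("MyDaemon");
    // the child is inited/started/stopped along with its parent
    addService(new JvmPauseMonitor());
  }
}
{code}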






[jira] [Commented] (HADOOP-12984) Add GenericTestUtils.getTestDir method and use it for temporary directory in tests

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625037#comment-15625037
 ] 

ASF GitHub Bot commented on HADOOP-12984:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/53


> Add GenericTestUtils.getTestDir method and use it for temporary directory in 
> tests
> --
>
> Key: HADOOP-12984
> URL: https://issues.apache.org/jira/browse/HADOOP-12984
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-alpha1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12984-003.patch, 
> HADOOP-12984.branch-2.8.00.patch, HDFS-9263-001.patch, HDFS-9263-002.patch, 
> HDFS-9263-003.patch
>
>
> We have seen some tests use the path {{test/build/data}} to store files, 
> leaking files which fail the new post-build RAT checks on Jenkins (and 
> dirtying all development systems with paths which {{mvn clean}} will miss).
> To avoid recurrences of bugs such as MAPREDUCE-6589 and HDFS-9571, we'd like 
> to introduce new utility methods to get a temporary directory path easily, 
> and use those methods in tests.
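A sketch of the intended usage ({{getTestDir}} is the method this issue adds 
to GenericTestUtils; "csv" is a made-up subdirectory):
{code}
// import java.io.File;
// import org.apache.hadoop.test.GenericTestUtils;
File dataDir = GenericTestUtils.getTestDir("csv");
dataDir.mkdirs();
// files under dataDir land in the build tree, so 'mvn clean' removes them
{code}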






[jira] [Commented] (HADOOP-13649) s3guard: implement time-based (TTL) expiry for LocalMetadataStore

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625054#comment-15625054
 ] 

Steve Loughran commented on HADOOP-13649:
-

One thing to consider is that the local store is just a cache, and so Guava's 
{{com.google.common.cache.Cache}} class may be the place to start coding.
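A minimal sketch of that approach (Guava is already on the Hadoop classpath; 
the 10-minute TTL is a made-up value):
{code}
// import com.google.common.cache.Cache;
// import com.google.common.cache.CacheBuilder;
// import java.util.concurrent.TimeUnit;
// Path is org.apache.hadoop.fs.Path; PathMetadata is the s3guard entry type
Cache<Path, PathMetadata> cache = CacheBuilder.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES)  // entries expire after the TTL
    .build();
{code}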

> s3guard: implement time-based (TTL) expiry for LocalMetadataStore
> -
>
> Key: HADOOP-13649
> URL: https://issues.apache.org/jira/browse/HADOOP-13649
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> LocalMetadataStore is primarily a reference implementation for testing.  It 
> may be useful in narrow circumstances where the workload can tolerate 
> short-term lack of inter-node consistency:  Being in-memory, one JVM/node's 
> LocalMetadataStore will not see another node's changes to the underlying 
> filesystem.
> To put a bound on the time during which this inconsistency may occur, we 
> should implement time-based (a.k.a. Time To Live / TTL) expiration for 
> LocalMetadataStore.






[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625061#comment-15625061
 ] 

Steve Loughran commented on HADOOP-13650:
-

HADOOP-13311 proposes an entry point for s3 commands.

s3guard should integrate there.

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata store, 
> i.e. create or delete the metadata store, or {{import}} and {{sync}} the 
> file metadata between the metadata store and S3: 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 






[jira] [Commented] (HADOOP-13311) S3A shell entry point to support commands specific to S3A.

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625072#comment-15625072
 ] 

Steve Loughran commented on HADOOP-13311:
-

all s3guard operations should go through here too, as covered in HADOOP-13650.

* s3a s3guard 
** perform a specific s3guard action: build tables, validate tables, delete 
tables (possible command shapes are sketched below)
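Purely hypothetical command shapes, with none of the names final:
{code}
s3a s3guard init     s3a://example-bucket/
s3a s3guard validate s3a://example-bucket/
s3a s3guard destroy  s3a://example-bucket/
{code}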



> S3A shell entry point to support commands specific to S3A.
> --
>
> Key: HADOOP-13311
> URL: https://issues.apache.org/jira/browse/HADOOP-13311
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> Create a new {{s3a}} shell entry point.  This can support diagnostic and 
> administrative commands that are specific to S3A and wouldn't make sense to 
> group under existing scripts like {{hadoop}} or {{hdfs}}.






[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625141#comment-15625141
 ] 

Steve Loughran commented on HADOOP-13590:
-

I'm looking at {{getMaxTgtRenewalRetryCount}} and not sure I understand it. 
I'm confident I could work out what it does given time, but it's not 
immediately obvious. What does it do? And could some comments in the code 
describe that?

Test-wise, I've added support for more backoff in tests that wait; look in 
LambdaTestUtils.

I also see that the code to set up a 
{{javax.security.auth.login.Configuration}} is surfacing again... there are a 
fair few copies of this code around (I know of one in the registry tests), 
all of which have those same hacks for IBM JVMs, etc., which makes it a 
maintenance pain. Could we think about coalescing them into one or two 
utility methods somewhere?

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014].
> If something temporarily goes wrong and results in an IOE, no further 
> renewal is attempted even after the problem recovers, and the client will 
> eventually fail to authenticate. We should retry with our best effort, 
> until the tgt expires, in the hope that the error recovers before that.
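A sketch of the retry policy being proposed (an assumption about the intent, 
not the patch itself; {{checkTGTAndReloginFromKeytab()}} is a real UGI method):
{code}
// import java.io.IOException;
// import org.apache.hadoop.security.UserGroupInformation;
static void renewUntilExpiry(UserGroupInformation ugi, long tgtEndTimeMs)
    throws InterruptedException {
  long backoffMs = 60_000;                  // made-up initial backoff
  while (System.currentTimeMillis() < tgtEndTimeMs) {
    try {
      ugi.checkTGTAndReloginFromKeytab();   // relogin if close to expiry
      return;                               // renewed; resume normal schedule
    } catch (IOException ioe) {
      Thread.sleep(backoffMs);              // transient error: back off, retry
      backoffMs = Math.min(backoffMs * 2, 15 * 60_000);
    }
  }
}
{code}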






[jira] [Commented] (HADOOP-13680) fs.s3a.readahead.range to use getLongBytes

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625184#comment-15625184
 ] 

ASF GitHub Bot commented on HADOOP-13680:
-

Github user abmodi closed the pull request at:

https://github.com/apache/hadoop/pull/144


> fs.s3a.readahead.range to use getLongBytes
> --
>
> Key: HADOOP-13680
> URL: https://issues.apache.org/jira/browse/HADOOP-13680
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
> Fix For: 2.8.0
>
> Attachments: HADOOP-13680-branch-2-004.patch, HADOOP-13680.001.patch
>
>
> The {{fs.s3a.readahead.range}} value is measured in bytes, but can be 
> hundreds of KB. It would be easier to use getLongBytes, so it can be set to 
> values like "300k".
> This will be backwards compatible with the existing settings if anyone is 
> using them, because a value with no prefix will still default to bytes.






[jira] [Created] (HADOOP-13779) s3a to skip s3 bucket existence check in initialize() for faster creation

2016-11-01 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13779:
---

 Summary: s3a to skip s3 bucket existence check in initialize() for 
faster creation
 Key: HADOOP-13779
 URL: https://issues.apache.org/jira/browse/HADOOP-13779
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Rajesh Balamohan
Priority: Minor


The {{verifyBucketExists()}} call makes an HTTPS HEAD request against the 
bucket when creating an FS, adding an extra HTTPS call of a few hundred 
millis. (As this is the first call, the cost may be amplified by DNS lookup, 
thread-pool creation, etc., so it may appear more expensive than it is.)

If a bucket doesn't exist, the first actual client-initiated operation (get, 
list, put) will trigger a failure, so the call could potentially be stripped 
out. Two caveats:

# it will complicate failure reporting if you want to distinguish "unauthed" 
and "not found" on a blob from those on a bucket.
# the first HTTPS request probably includes first-call overhead (pool 
creation, DNS, any certificate checking), which may make that first call 
appear more expensive than it is... those actions would just take place on 
the first client-initiated call instead.






[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625235#comment-15625235
 ] 

Steve Loughran commented on HADOOP-13651:
-

# I've actually been talking with [~rajesh.balamohan] about pulling that 
initial bucket check (HADOOP-13379). It adds measurable delays to all FS 
instance creation, and permission errors will show up later on anyway. The 
tricky bit is having a later 40x failure be uprated to "there is no such 
bucket" rather than "you can't access a file". You can save 500+ ms by 
removing an otherwise needless HTTP request; sometimes it can even take 
longer. I think I would like to cut it, if the failure can be graceful (some 
tracking of whether a request has ever succeeded; on the first auth failure, 
go from simple translation to adding a "check bucket exists" step / actually 
falling back to a second check).
# If a user has valid read credentials, the bucket-exists check (currently) 
fails in init. If the check is delayed, then the first s3 read/write will 
fail instead.
# Bucket nonexistent -> 404? 410? -> FNFE.
# Bucket exists but caller not authed -> 401? 403? -> AccessDeniedException.
# If a user has read but not write credentials, any attempt to do a multipart 
purge will fail; that's now caught & downgraded.
# If a user has no credentials, then, if the auth chain is set up to allow 
anonymous access, they'll try an anonymous auth (not a default option) and 
get read access to any bucket declared publicly readable.

A sketch of that 40x translation follows.
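(A made-up helper, not code from the branch:)
{code}
// import java.io.FileNotFoundException;
// import java.io.IOException;
// import java.nio.file.AccessDeniedException;
static IOException translateBucketError(String bucket, int status) {
  switch (status) {
    case 404:
    case 410:
      return new FileNotFoundException("Bucket does not exist: " + bucket);
    case 401:
    case 403:
      return new AccessDeniedException(bucket, null, "not authorised");
    default:
      return new IOException("Error " + status + " on bucket " + bucket);
  }
}
{code}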



> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.






[jira] [Updated] (HADOOP-13603) Ignore package line length checkstyle rule

2016-11-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13603:
---
Hadoop Flags: Reviewed
 Summary: Ignore package line length checkstyle rule  (was: Remove 
package line length checkstyle rule)

> Ignore package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: HADOOP-13603.001.patch
>
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.






[jira] [Updated] (HADOOP-13603) Ignore package line length checkstyle rule

2016-11-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13603:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks 
[~shaneku...@gmail.com] for the contribution!

> Ignore package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13603.001.patch
>
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.






[jira] [Commented] (HADOOP-13778) Unit test failed in TestAliyunOSSContractCreate

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625250#comment-15625250
 ] 

Steve Loughran commented on HADOOP-13778:
-

Have you tested this against the infrastructure?

> Unit test failed in TestAliyunOSSContractCreate
> ---
>
> Key: HADOOP-13778
> URL: https://issues.apache.org/jira/browse/HADOOP-13778
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Attachments: HADOOP-13778.001.patch
>
>
>  {{fs.contract.is-blobstore}} was split into more descriptive flags in 
> HADOOP-13502; {{fs.contract.create-visibility-delayed}} also needs to be 
> added to {{aliyun-oss.xml}}.






[jira] [Commented] (HADOOP-13614) Purge some superfluous/obsolete S3 FS tests that are slowing test runs down

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625266#comment-15625266
 ] 

ASF GitHub Bot commented on HADOOP-13614:
-

Github user steveloughran closed the pull request at:

https://github.com/apache/hadoop/pull/132


> Purge some superfluous/obsolete S3 FS tests that are slowing test runs down
> ---
>
> Key: HADOOP-13614
> URL: https://issues.apache.org/jira/browse/HADOOP-13614
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13614-branch-2-001.patch, 
> HADOOP-13614-branch-2-002.patch, HADOOP-13614-branch-2-002.patch, 
> HADOOP-13614-branch-2-004.patch, HADOOP-13614-branch-2-005.patch, 
> HADOOP-13614-branch-2-006.patch, HADOOP-13614-branch-2-007.patch, 
> HADOOP-13614-branch-2-008.patch, testrun.txt
>
>
> Some of the slow test cases contain tests that are now obsoleted by newer 
> ones. For example, {{ITestS3ADeleteManyFiles}} has the test case 
> {{testOpenCreate()}}, which writes then reads files of up to 25 MB.
> Have a look at which of the s3a tests are taking time, review them to see if 
> newer tests have superseded the slow ones, and cut them where appropriate.






[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625286#comment-15625286
 ] 

Steve Loughran commented on HADOOP-13651:
-

bq. What needs to be done before we can commit this patch (besides the 
LOG.isDebugEnabled thing)? 

Well, it is just a branch, so it's not expected to be perfect.

What will happen is that once the merge is in, the branch is going to have to 
be kept in sync with a piece of code which still has a major rate of change. 
Someone is going to have to volunteer to do that regularly, or share the 
workload. I'd recommend a process of:

# checkout and build trunk
# run the hadoop-aws integration tests on trunk
# if the tests all worked, checkout this branch, rebase it onto trunk, and 
rerun those integration tests
# if the tests didn't work, and they used to, consider reporting a problem, 
ideally after checking that an issue hasn't already been opened and that your 
test setup hasn't regressed

Step 2 makes sure that you don't rebase onto broken code and then get 
confused into thinking the test failures are your fault.

I'd recommend doing this weekly, or whenever a big change goes into s3a. It's 
already time to do this, given the current difference between the branch and 
trunk. (A sketch of the loop in commands follows.)
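A sketch of that loop in commands (the feature branch name is assumed to be 
HADOOP-13345, and the hadoop-aws integration tests need credentials 
configured in auth-keys.xml):
{code}
git checkout trunk && git pull
mvn -pl hadoop-tools/hadoop-aws clean verify     # steps 1-2: tests on trunk
git checkout HADOOP-13345 && git rebase trunk    # step 3: rebase the branch
mvn -pl hadoop-tools/hadoop-aws clean verify     # rerun the same tests
{code}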

> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.






[jira] [Commented] (HADOOP-13603) Ignore package line length checkstyle rule

2016-11-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625290#comment-15625290
 ] 

Hudson commented on HADOOP-13603:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10743 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10743/])
HADOOP-13603. Ignore package line length checkstyle rule. Contributed by 
Shane Kumpf. (aajisaka: rev 5b577f43a269caeee776a59695427985d0cd1697)
* (edit) hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
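For reference, the change is of this shape (a sketch; the exact pattern is in 
rev 5b577f43):
{code}
<module name="LineLength">
  <!-- don't fail on package/import statements, which can't be wrapped -->
  <property name="ignorePattern" value="^(package|import) .*"/>
  <property name="max" value="80"/>
</module>
{code}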


> Ignore package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13603.001.patch
>
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.






[jira] [Commented] (HADOOP-13603) Ignore package line length checkstyle rule

2016-11-01 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625294#comment-15625294
 ] 

Shane Kumpf commented on HADOOP-13603:
--

Thanks [~ajisakaa]!

> Ignore package line length checkstyle rule
> --
>
> Key: HADOOP-13603
> URL: https://issues.apache.org/jira/browse/HADOOP-13603
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13603.001.patch
>
>
> The packages related to the DockerLinuxContainerRuntime all exceed the 80 
> char line length limit enforced by checkstyle. This causes every build to 
> fail with a -1. I would like to exclude this rule from causing a failure.
> {code}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/package-info.java:23:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/MockPrivilegedOperationCaptor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged;: 
> Line is longer than 80 characters (found 84).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerRuntimeTestingUtils.java:17:package
>  org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime;: 
> Line is longer than 80 characters (found 81).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/MockDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerContainerStatusHandler.java:17:package
>  
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;:
>  Line is longer than 80 characters (found 88).
> {code}
> Alternatively, we could look to restructure the packages here, but I question 
> what value this check really provides.
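For context, the usual way to achieve this in Checkstyle is to let the LineLength
check skip package and import statements, which cannot be wrapped. A minimal
sketch of such a checkstyle.xml entry (the placement of this module in Hadoop's
actual checkstyle config is an assumption, shown only for illustration):

{code}
<!-- sketch: exempt package/import statements from the 80-char limit -->
<module name="LineLength">
  <property name="max" value="80"/>
  <property name="ignorePattern" value="^(package|import) .*"/>
</module>
{code}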



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-11-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625300#comment-15625300
 ] 

Steve Loughran commented on HADOOP-13651:
-

Regarding the isEmptyDir logic: if you have a plan to address that, then yes, 
get the merge in, with a separate task to clean up the logic. It may render 
HADOOP-13736 obsolete too, so maybe hold back on that change until the empty-dir 
checks have been reworked. This would keep the cache more generic.

> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13651-HADOOP-13345.001.patch, 
> HADOOP-13651-HADOOP-13345.002.patch, HADOOP-13651-HADOOP-13345.003.patch
>
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.
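As a rough illustration of the integration being described, a sketch of the
check-the-store-first pattern (class and method names here are assumptions for
illustration, not the patch's actual code):

{code}
// sketch: consult the MetadataStore when configured, else go straight to S3
private MetadataStore metadataStore;  // null when no store is configured

public FileStatus getFileStatus(Path f) throws IOException {
  if (metadataStore != null) {
    PathMetadata meta = metadataStore.get(f);
    if (meta != null) {
      return meta.getFileStatus();    // served from the metadata store
    }
  }
  return s3GetFileStatus(f);          // fall back to S3 itself
}
{code}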



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-6801) io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are still in CommonConfigurationKeysPublic.java and used in SequenceFile.java

2016-11-01 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625461#comment-15625461
 ] 

Harsh J commented on HADOOP-6801:
-

- Updated patch addresses the checkstyle issues

> io.sort.mb and io.sort.factor were renamed and moved to mapreduce but are 
> still in CommonConfigurationKeysPublic.java and used in SequenceFile.java
> ---
>
> Key: HADOOP-6801
> URL: https://issues.apache.org/jira/browse/HADOOP-6801
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Erik Steffl
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-6801.05.patch, HADOOP-6801.r1.diff, 
> HADOOP-6801.r2.diff, HADOOP-6801.r3.diff, HADOOP-6801.r4.diff, 
> HADOOP-6801.r5.diff
>
>
> Following configuration keys in CommonConfigurationKeysPublic.java (former 
> CommonConfigurationKeys.java):
> public static final String  IO_SORT_MB_KEY = "io.sort.mb";
> public static final String  IO_SORT_FACTOR_KEY = "io.sort.factor";
> are partially moved:
>   - they were renamed to mapreduce.task.io.sort.mb and 
> mapreduce.task.io.sort.factor respectively
>   - they were moved to mapreduce project, documented in mapred-default.xml
> However:
>   - they are still listed in CommonConfigurationKeysPublic.java as quoted 
> above
>   - strings "io.sort.mb" and "io.sort.factor" are used in SequenceFile.java 
> in Hadoop Common project
> Not sure what the solution is, these constants should probably be removed 
> from CommonConfigurationKeysPublic.java but I am not sure what's the best 
> solution for SequenceFile.java.
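One option that keeps old configs working while steering users to the new names
is Hadoop's key-deprecation machinery; a sketch only (whether SequenceFile should
depend on the MapReduce names at all is exactly the open question above):

{code}
// sketch: map the legacy keys to their MapReduce replacements
Configuration.addDeprecation("io.sort.mb", "mapreduce.task.io.sort.mb");
Configuration.addDeprecation("io.sort.factor", "mapreduce.task.io.sort.factor");
{code}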



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Fei Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HADOOP-13773:
-
Attachment: HADOOP-13773.patch

Thanks to Ravi Prakash and Yuanbo Liu for the suggestions; the patch is 
uploaded. Please review it again. Thanks!

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take 
> effect. In bin/hadoop I see
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m takes effect and 
> -Xmx1024m is ignored.
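A minimal sketch of the kind of guard this calls for in hadoop-env.sh: only
apply the 512m default when no heap size was requested (an illustration, not
the attached patch itself):

{code}
# sketch: don't let the client default override an explicit HADOOP_HEAPSIZE
if [ "$HADOOP_HEAPSIZE" = "" ]; then
  export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
fi
{code}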



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13768:
---
Attachment: HADOOP-13768.001.patch

> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625616#comment-15625616
 ] 

Sean Busbey commented on HADOOP-11804:
--

Thanks for the review! I'll update to the latest trunk as part of getting the 
rest of the feedback addressed.

I'm a bit concerned about the build failure, since even if you review against 
my current commit (dbd2057), the hygiene check there (that the same class 
doesn't end up in more than one of the shaded jars) would still have failed 
later. I'm confused about test protobuf classes showing up in the client-api 
jar, but hopefully the cause will be obvious once I rebase.

I'm planning to un-shade logging and htrace today, so if you'd prefer I not do 
that, please let me know. A related question: do we have a project-wide logging 
framework?

{code}
Busbey-MBA:hadoop busbey$ git grep -l  "org.slf4j" | grep "\\.java" | wc -l
 320
Busbey-MBA:hadoop busbey$ git grep -l  "org.apache.commons.logging" | grep 
"\\.java" | wc -l
1410
Busbey-MBA:hadoop busbey$ git grep -l  "java.util.logging" | grep "\\.java" | 
wc -l
   2
Busbey-MBA:hadoop busbey$ git grep -l  "org.apache.log4j" | grep "\\.java" | wc 
-l
 217
Busbey-MBA:hadoop busbey$ git grep -l  "org.apache.logging.log4j" | grep 
"\\.java" | wc -l
   0
{code}

Should I just un-shade all the logging libraries?
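For concreteness, un-shading would roughly mean leaving those artifacts out of
the shade plugin's relocations; a hypothetical sketch of the pom fragment
involved (the coordinates listed are the common ones, not necessarily the
patch's exact list):

{code}
<!-- sketch: keep logging APIs and htrace unshaded so downstream users can
     configure and trace through them -->
<artifactSet>
  <excludes>
    <exclude>org.slf4j:*</exclude>
    <exclude>commons-logging:commons-logging</exclude>
    <exclude>log4j:log4j</exclude>
    <exclude>org.apache.htrace:*</exclude>
  </excludes>
</artifactSet>
{code}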

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
>
> Make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> See the proposal on the parent issue for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13768:
---
Status: Patch Available  (was: In Progress)

> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625622#comment-15625622
 ] 

Genmao Yu commented on HADOOP-13768:


Result of unit test:
{code}
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun ---
[INFO] Deleting 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun 
---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/src/main/resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun 
---
[INFO] Compiling 8 source files to 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/classes
[INFO] 
[INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun ---
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-aliyun ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Copying 2 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-aliyun ---
[INFO] Compiling 16 source files to 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun ---
[INFO] Surefire report directory: 
/Users/genmao.ygm/opensource/hadoop/hadoop-tools/hadoop-aliyun/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.889 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.107 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.773 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.87 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.968 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.411 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.074 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.294 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.641 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.146 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25

[jira] [Updated] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13768:
---
Attachment: (was: HADOOP-13768.001.patch)

> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13768:
---
Attachment: HADOOP-13768.001.patch

> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13769) AliyunOSS object inputstream.close() will read the remaining bytes of the OSS object, potentially transferring a lot of bytes from OSS that are discarded.

2016-11-01 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625635#comment-15625635
 ] 

Genmao Yu commented on HADOOP-13769:


The {{input stream close issue}} has been fixed by Aliyun OSS, so we can simply 
upgrade the Aliyun OSS SDK in this case.

> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.
> --
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
> Fix For: 3.0.0-alpha2
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13660) Upgrade commons-configuration version

2016-11-01 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13660:
---
Attachment: HADOOP-13660.006.patch

Added a findbugs exclusion for the error, for the reasons listed above. No 
other changes from .005 to .006.

> Upgrade commons-configuration version
> -
>
> Key: HADOOP-13660
> URL: https://issues.apache.org/jira/browse/HADOOP-13660
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13660-configuration2.001.patch, 
> HADOOP-13660.001.patch, HADOOP-13660.002.patch, HADOOP-13660.003.patch, 
> HADOOP-13660.004.patch, HADOOP-13660.005.patch, HADOOP-13660.006.patch
>
>
> We're currently pulling in version 1.6 - I think we should upgrade to the 
> latest 1.10.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625669#comment-15625669
 ] 

Hadoop QA commented on HADOOP-13768:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-tools_hadoop-aliyun generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836363/HADOOP-13768.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 27c35cb89f4a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b577f4 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10944/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10944/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10944/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assi

[jira] [Created] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts

2016-11-01 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-13780:


 Summary: LICENSE/NOTICE are out of date for source artifacts
 Key: HADOOP-13780
 URL: https://issues.apache.org/jira/browse/HADOOP-13780
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0-alpha2
Reporter: Sean Busbey
Priority: Blocker


We need to perform a check that all of our bundled works are properly accounted 
for in our LICENSE/NOTICE files.

At a minimum, it looks like HADOOP-10075 introduced some changes that have not 
been accounted for.

For example, the jsTree plugin found at 
{{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}}
 does not show up in LICENSE.txt to (a) indicate that we're redistributing it 
under the MIT option and (b) give proper citation of the original copyright 
holder per ASF policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts

2016-11-01 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625680#comment-15625680
 ] 

Sean Busbey commented on HADOOP-13780:
--

Important to note that the jsTree example is not meant to be exhaustive; I did 
not look to see what else wasn't updated, I just randomly searched for a 
copyright string. I also have not yet looked to see if the binary bundlings 
properly account for the update (see HBASE-12894 for where folks over in HBase 
are checking for the same).

> LICENSE/NOTICE are out of date for source artifacts
> ---
>
> Key: HADOOP-13780
> URL: https://issues.apache.org/jira/browse/HADOOP-13780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: Sean Busbey
>Priority: Blocker
>
> We need to perform a check that all of our bundled works are properly 
> accounted for in our LICENSE/NOTICE files.
> At a minimum, it looks like HADOOP-10075 introduced some changes that have 
> not been accounted for.
> For example, the jsTree plugin found at 
> {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}}
>  does not show up in LICENSE.txt to (a) indicate that we're redistributing it 
> under the MIT option and (b) give proper citation of the original copyright 
> holder per ASF policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13768:
---
Attachment: HADOOP-13768.002.patch

> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13769) AliyunOSS object inputstream.close() will read the remaining bytes of the OSS object, potentially transferring a lot of bytes from OSS that are discarded.

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13769:
---
Status: Patch Available  (was: Open)

> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.
> --
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13769.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13769) AliyunOSS object inputstream.close() will read the remaining bytes of the OSS object, potentially transferring a lot of bytes from OSS that are discarded.

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13769:
---
Attachment: HADOOP-13769.001.patch

Update: this just upgrades the Aliyun OSS SDK version.

> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.
> --
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13769.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13769) AliyunOSS object inputstream.close() will read the remaining bytes of the OSS object, potentially transferring a lot of bytes from OSS that are discarded.

2016-11-01 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu reassigned HADOOP-13769:
--

Assignee: Genmao Yu

> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.
> --
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13769.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13769) AliyunOSS object inputstream.close() will read the remaining bytes of the OSS object, potentially transferring a lot of bytes from OSS that are discarded.

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625785#comment-15625785
 ] 

Hadoop QA commented on HADOOP-13769:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836377/HADOOP-13769.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 4ddd58c3be67 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b577f4 |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10945/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10945/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.
> --
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13769.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additi

[jira] [Commented] (HADOOP-13768) AliyunOSS: DeleteObjectsRequest has 1000 objects limit

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625802#comment-15625802
 ] 

Hadoop QA commented on HADOOP-13768:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836376/HADOOP-13768.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cabc03927a27 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b577f4 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10946/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10946/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: DeleteObjectsRequest has 1000 objects limit 
> ---
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

--

[jira] [Resolved] (HADOOP-13729) switch to Configuration.getLongBytes for byte options

2016-11-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13729.
-
   Resolution: Duplicate
 Assignee: Abhishek Modi  (was: Steve Loughran)
Fix Version/s: 2.8.0

> switch to Configuration.getLongBytes for byte options
> -
>
> Key: HADOOP-13729
> URL: https://issues.apache.org/jira/browse/HADOOP-13729
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Abhishek Modi
> Fix For: 2.8.0
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> It's fiddly working out how many bytes to use for 128MB of readahead, and 
> equally hard to work out what a given value actually means.
> If we switch to {{Configuration.getLongBytes()}} for reading in the readahead, 
> partition and threshold values, all existing configs will work, but new 
> configs can use K, M, G suffixes.
> Easy to code; should add a new test/adapt existing ones.
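A quick sketch of what the switch buys (the key shown is the s3a readahead
option; treat the exact constant and default as assumptions):

{code}
// sketch: getLongBytes accepts suffixed values such as "64K" or "1M"
long readahead = conf.getLongBytes("fs.s3a.readahead.range", 64 * 1024);
{code}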



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-11-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625952#comment-15625952
 ] 

ASF GitHub Bot commented on HADOOP-13309:
-

Github user cnauroth closed the pull request at:

https://github.com/apache/hadoop/pull/138


> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2016-11-01 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625962#comment-15625962
 ] 

Ted Yu commented on HADOOP-11229:
-

Agree with Tsuyoshi Ozawa.

> JobStoryProducer is not closed upon return from 
> Gridmix#setupDistCacheEmulation()
> -
>
> Key: HADOOP-11229
> URL: https://issues.apache.org/jira/browse/HADOOP-11229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11229.v3.patch, HADOOP-11229_001.patch, 
> HADOOP-11229_002.patch
>
>
> Here is related code:
> {code}
>   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
>   exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
> {code}
> jsp should be closed upon return from setupDistCacheEmulation().
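A sketch of the shape of the fix, assuming {{JobStoryProducer}} is
{{Closeable}} (otherwise a plain try/finally calling {{jsp.close()}} achieves
the same):

{code}
// sketch: ensure the producer is closed on all paths
try (JobStoryProducer jsp = createJobStoryProducer(traceIn, conf)) {
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
}
{code}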



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13590) Retry until TGT expires even if the UGI renewal thread encountered exception

2016-11-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15625994#comment-15625994
 ] 

Xiao Chen commented on HADOOP-13590:


Thanks [~ste...@apache.org] for the prompt review!
Good point on {{getMaxTgtRenewalRetryCount}}; on second thought I think it can 
be eliminated, so the retry policy goes to {{Int.MAX_VALUE}} and we simply 
check against the TGT end time. Currently it only makes sure we can create the 
RetryPolicy with the correct maxRetries. Will do that in the next patch, and 
add comments.
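For illustration, a sketch of that idea: keep retrying with capped exponential
backoff until the ticket's end time ({{tgt}} and {{renewTgt}} are placeholders;
the real change wires this into the UGI renewal thread):

{code}
// sketch: bound retries by the TGT end time rather than a retry count
// (fragment; assumes the enclosing method declares InterruptedException)
long tgtEndTime = tgt.getEndTime().getTime();  // javax KerberosTicket end time
long sleepMs = 1000;
while (System.currentTimeMillis() < tgtEndTime) {
  try {
    renewTgt();        // placeholder for the actual renewal logic
    break;
  } catch (IOException ioe) {
    long remaining = tgtEndTime - System.currentTimeMillis();
    if (remaining <= 0) {
      throw ioe;       // the TGT has expired; give up
    }
    Thread.sleep(Math.min(sleepMs, remaining));
    sleepMs = Math.min(sleepMs * 2, 60 * 1000);  // capped exponential backoff
  }
}
{code}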

bq. Test-wise, I've added support for more backoff in tests that wait; look in 
LambdaTestUtils.
Thanks for the good work; let me try replacing the GenericTestUtils usage with it.

bq. I also see that the code to set up a 
javax.security.auth.login.Configuration is surfacing again...
See my [comment 
above|https://issues.apache.org/jira/browse/HADOOP-13590?focusedCommentId=15517201&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15517201];
 it's due to the conflicting class name {{Configuration}} in Hadoop and in 
javax. I guess we'll have to explicitly define one way or the other. :(
Happy to wrap up a utility function to clean up all the IBM hacks etc.; I 
propose creating a separate jira to limit the scope of this one. Please let me 
know if you feel otherwise.

> Retry until TGT expires even if the UGI renewal thread encountered exception
> 
>
> Key: HADOOP-13590
> URL: https://issues.apache.org/jira/browse/HADOOP-13590
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13590.01.patch, HADOOP-13590.02.patch, 
> HADOOP-13590.03.patch, HADOOP-13590.04.patch, HADOOP-13590.05.patch, 
> HADOOP-13590.06.patch, HADOOP-13590.07.patch, HADOOP-13590.08.patch
>
>
> The UGI has a background thread to renew the tgt. On exception, it 
> [terminates 
> itself|https://github.com/apache/hadoop/blob/bee9f57f5ca9f037ade932c6fd01b0dad47a1296/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1013-L1014]
> If something temporarily goes wrong that results in an IOE, then even if it 
> recovers, no renewal will be done and the client will eventually fail to 
> authenticate. We should retry with our best effort, until the TGT expires, in 
> the hope that the error recovers before that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626097#comment-15626097
 ] 

Andrew Wang commented on HADOOP-11804:
--

Thanks for the quick response Sean,

bq. I'm confused about test protobuf classes showing up in the client-api jar...

Yea, this was a mystery to me too. Could be PEBKAC, in which case I'd 
appreciate fuller build instructions.

bq. I think the answer for the logging libraries and htrace is to leave them 
unshaded, since it's common to want to modify logging settings and to want to 
trace through e.g. the hdfs client. Would like some feedback here.

The logging rationale here sounds good to me. We've been trying to migrate 
things to slf4j, but evidently we haven't made much progress.

Unshading HTrace also sounds good, since I'm guessing that shading will mess up 
tracing from an app into the Hadoop client. [~cmccabe] care to comment more 
authoritatively?

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
>
> Make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> See the proposal on the parent issue for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626142#comment-15626142
 ] 

Ravi Prakash commented on HADOOP-13773:
---

Thanks for your contribution Fei Hui and for your careful review Yuanbo Liu!

I've committed this to branch-2 and branch-2.8. 

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take 
> effect. In bin/hadoop I see
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m takes effect and 
> -Xmx1024m is ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13773:
--
Assignee: Fei Hui

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take 
> effect. In bin/hadoop I see
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m takes effect and 
> -Xmx1024m is ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626155#comment-15626155
 ] 

Ravi Prakash commented on HADOOP-13773:
---

Just for future reference: I've added you to the Contributors1 role, which 
allows you to assign issues to yourself. Also, please follow the patch naming 
scheme: all patch files should have a version, and if the patch is not for 
trunk, the name should contain the branch. So the file name for your patch 
should be HADOOP-13773.branch-2.01.patch.

Thanks for your contribution, and we look forward to many more from you!

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take 
> effect. In bin/hadoop I see
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m takes effect and 
> -Xmx1024m is ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13773:
--
Status: Patch Available  (was: Open)

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.7.3, 2.6.1
>Reporter: Fei Hui
>Assignee: Fei Hui
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take 
> effect. In bin/hadoop I see
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m takes effect and 
> -Xmx1024m is ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-01 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626245#comment-15626245
 ] 

Lei (Eddy) Xu commented on HADOOP-13650:


Thanks for referring to the patch, [~ste...@apache.org]. I will revise the 
code based on HADOOP-13311.

As for the status of this patch: it is almost ready, mostly waiting for the 
DynamoDB metadata store to be committed.

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, e.g., create or delete the metadata store, or {{import}} and {{sync}} 
> the file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality.
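To make the shape concrete, a hypothetical sketch of what such a CLI could look
like (the command and subcommand names are illustrative only, not a committed
interface):

{code}
# hypothetical usage sketch
hadoop s3guard init    s3a://bucket/        # create the metadata store
hadoop s3guard import  s3a://bucket/path/   # load existing S3 metadata into it
hadoop s3guard destroy s3a://bucket/        # delete the metadata store
{code}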



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

2016-11-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626274#comment-15626274
 ] 

Xiaoyu Yao commented on HADOOP-13738:
-

Thanks [~arpitagarwal] for updating the patch. Patch v5 LGTM. +1.

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch, HADOOP-13738.02.patch, 
> HADOOP-13738.03.patch, HADOOP-13738.04.patch, HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories, which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
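
In shell terms the proposed probe is roughly the following (illustrative 
only; the Java patch implements its own file IO, and the path is hypothetical):

{code}
# Write a small file in the directory under test and fsync it, so the
# kernel must actually push bytes to the device rather than just stat paths.
DIR=/data/1/dfs/dn   # hypothetical directory being checked
if dd if=/dev/zero of="$DIR/.diskcheck" bs=512 count=1 conv=fsync 2>/dev/null; then
  rm -f "$DIR/.diskcheck"
else
  echo "disk check failed for $DIR"
fi
{code}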



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2016-11-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626288#comment-15626288
 ] 

Sangjin Lee commented on HADOOP-11804:
--

Thanks for the work [~busbey]! I just did a quick test with the latest patch.

One high-level concern is maintaining dependencies in the poms. If 
a developer adds a new dependency to a module, how would that propagate to 
these client poms? Would he/she need to add it to the client poms for the 
most part? It wasn't entirely clear to me what the cost of maintenance is. If 
that is the only way to keep it clean, that's OK. But it would be great if that 
cost is kept to a minimum.

1.
The patch indeed does not apply for me via plain {{git apply}}: it breaks with 
{{hadoop-client/pom.xml}} and {{hadoop-maven-plugins/pom.xml}}. I did {{git 
apply --reject HADOOP-11804.1.patch}}.

2.
Once I fixed the git apply issues, I did {{mvn clean install package -Pdist 
-DskipTests -Dmaven.javadoc.skip}} and it fails right away:
{noformat}
[ERROR]   The project 
org.apache.hadoop:hadoop-client-minicluster:3.0.0-alpha2-SNAPSHOT 
(/Users/sjlee/git/hadoop-trunk/hadoop-client-modules/hadoop-client-minicluster/pom.xml)
 has 1 error
[ERROR] 'dependencies.dependency.version' for org.mortbay.jetty:jetty:jar 
is missing. @ line 266, column 17
{noformat}

I got past it by providing a version for this (chose 6.1.26).

3.
The build still fails with a couple of duplicate classes issues. One is what 
Andrew reported above. Another is duplicate jetty classes.
{noformat}
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed 
with message:
Duplicate classes found:

  Found in:
org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-alpha2-SNAPSHOT:compile

org.apache.hadoop:hadoop-client-minicluster:jar:3.0.0-alpha2-SNAPSHOT:compile
  Duplicate classes:
org/apache/hadoop/shaded/org/eclipse/jetty/io/ssl/SslConnection$2.class
org/apache/hadoop/shaded/org/eclipse/jetty/server/RequestLog.class
org/apache/hadoop/shaded/org/eclipse/jetty/server/ResourceCache$1.class
org/apache/hadoop/shaded/org/eclipse/jetty/util/log/AbstractLogger.class
org/apache/hadoop/shaded/org/eclipse/jetty/util/annotation/Name.class
org/apache/hadoop/shaded/org/eclipse/jetty/util/component/LifeCycle.class

org/apache/hadoop/shaded/org/eclipse/jetty/server/HttpChannel$Commit100Callback.class

org/apache/hadoop/shaded/org/eclipse/jetty/util/ssl/SslContextFactory$1.class
  ...
{noformat}

4.
Was there a significant difficulty in handling the timeline service v.2? Is it 
just the number of new dependencies we’re pulling in, or the fact that there is 
an HBase dependency?

5.
Regarding the logging libraries, I agree we probably want to exclude them. 
Things like log4j properties and the way slf4j works can cause issues down the 
road if shaded.


> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch, HADOOP-11804.4.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-11-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Attachment: HADOOP-13673.00.patch

-00:
* first pass

Running the start-* and stop-* commands will fire off daemons either as the 
user running them or, if the effective user id is root, as the appropriate 
(command)_(subcommand)_USER definition.  Secure daemons will "do the right 
thing": they get started as root but then switch to the appropriate user when needed. 
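
As a rough illustration of the dispatch (sketch only, using the 
(command)_(subcommand)_USER convention; the real logic lives in the shell 
function libraries):

{code}
# Illustrative only, not the patch: start the namenode as the configured
# service user when invoked as root, otherwise start it directly.
if [ "$(id -u)" -eq 0 ] && [ -n "${HDFS_NAMENODE_USER}" ]; then
  su -l "${HDFS_NAMENODE_USER}" -c "hdfs --daemon start namenode"
else
  hdfs --daemon start namenode
fi
{code}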

> Update sbin/start-* and sbin/stop-* to be smarter
> -
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13773:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Resolving since it looks like this was committed to branch-2.8.

Also it looks like this was committed without a precommit run. Let's try to be 
more careful in the future.

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.8.0
>
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take effect.
> In bin/hadoop:
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE has no effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m wins, and -Xmx1024m is 
> ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626318#comment-15626318
 ] 

Hadoop QA commented on HADOOP-13773:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} HADOOP-13773 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13773 |
| GITHUB PR | https://github.com/apache/hadoop/pull/150 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10947/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.8.0
>
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take effect.
> In bin/hadoop:
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE has no effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m wins, and -Xmx1024m is 
> ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-11-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626343#comment-15626343
 ] 

Allen Wittenauer edited comment on HADOOP-13673 at 11/1/16 7:00 PM:


-00:
* first pass

Running the start-* and stop-* commands will fire off daemons either as the 
user running them or, if the effective user id is root, as the appropriate 
(command)_(subcommand)_USER definition.  Secure daemons will "do the right 
thing": they get started as root but then switch to the appropriate user when needed. 

At this point, the old start-secure and start-dfs are not merged.  I may do 
that in a future pass.


was (Author: aw):
-00:
* first pass

Running the start-* and stop-* commands will fire off daemons either as the 
user running them or, if the effective user id is root, as the appropriate 
(command)_(subcommand)_USER definition.  Secure daemons will "do the right 
thing": they get started as root but then switch to the appropriate user when needed. 

> Update sbin/start-* and sbin/stop-* to be smarter
> -
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-11-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13673:
--
Status: Patch Available  (was: Open)

> Update sbin/start-* and sbin/stop-* to be smarter
> -
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false

2016-11-01 Thread Andres Perez (JIRA)
Andres Perez created HADOOP-13781:
-

 Summary: ZKFailoverController#initZK should use the 
ActiveStanbyElector constructor with failFast as false
 Key: HADOOP-13781
 URL: https://issues.apache.org/jira/browse/HADOOP-13781
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 3.0.0-alpha1
Reporter: Andres Perez
Priority: Minor


YARN-4243 introduced logic that retries establishing the connection when 
initializing the `ActiveStandbyElector`, adding the parameter `failFast`.

`ZKFailoverController#initZK` should use this constructor with `failFast` set 
to false, to let the ZKFC wait longer for the ZooKeeper server to be in a ready 
state when first initializing it.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-11-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626353#comment-15626353
 ] 

Allen Wittenauer commented on HADOOP-13673:
---

Argh. I'll fix the hadoop-project/pom.xml issue on 01. :(

> Update sbin/start-* and sbin/stop-* to be smarter
> -
>
> Key: HADOOP-13673
> URL: https://issues.apache.org/jira/browse/HADOOP-13673
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13673.00.patch
>
>
> As work continues on HADOOP-13397, it's become evident that we need better 
> hooks to start daemons as specifically configured users.  Via the 
> (command)_(subcommand)_USER environment variables in 3.x, we actually have a 
> standardized way to do that.  This in turn means we can make the sbin scripts 
> super functional with a bit of updating:
> * Consolidate start-dfs.sh and start-secure-dns.sh into one script
> * Make start-\*.sh and stop-\*.sh know how to switch users when run as root
> * Undeprecate start/stop-all.sh so that it could be used as root for 
> production purposes and as a single user for non-production users



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false

2016-11-01 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HADOOP-13781:
--
Description: 
YARN-4243 introduced logic that retries establishing the connection when 
initializing the {{ActiveStandbyElector}}, adding the parameter {{failFast}}.

{{ZKFailoverController#initZK}} should use this constructor with {{failFast}} 
set to false, to let the ZKFC wait longer for the ZooKeeper server to be in a 
ready state when first initializing it.


  was:
YARN-4243 introduced logic that retries establishing the connection when 
initializing the `ActiveStandbyElector`, adding the parameter `failFast`.

`ZKFailoverController#initZK` should use this constructor with `failFast` set 
to false, to let the ZKFC wait longer for the ZooKeeper server to be in a ready 
state when first initializing it.



> ZKFailoverController#initZK should use the ActiveStanbyElector constructor 
> with failFast as false
> -
>
> Key: HADOOP-13781
> URL: https://issues.apache.org/jira/browse/HADOOP-13781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 3.0.0-alpha1
>Reporter: Andres Perez
>Priority: Minor
>
> YARN-4243 introduced logic that retries establishing the connection 
> when initializing the {{ActiveStandbyElector}}, adding the parameter 
> {{failFast}}.
> {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} 
> set to false, to let the ZKFC wait longer for the ZooKeeper server to be in a 
> ready state when first initializing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false

2016-11-01 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HADOOP-13781:
--
Attachment: HADOOP-13781.patch

> ZKFailoverController#initZK should use the ActiveStanbyElector constructor 
> with failFast as false
> -
>
> Key: HADOOP-13781
> URL: https://issues.apache.org/jira/browse/HADOOP-13781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 3.0.0-alpha1
>Reporter: Andres Perez
>Priority: Minor
> Attachments: HADOOP-13781.patch
>
>
> YARN-4243 introduced logic that retries establishing the connection 
> when initializing the {{ActiveStandbyElector}}, adding the parameter 
> {{failFast}}.
> {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} 
> set to false, to let the ZKFC wait longer for the ZooKeeper server to be in a 
> ready state when first initializing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13771) Adding group mapping lookup utility without dependency on HDFS namenode

2016-11-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626392#comment-15626392
 ] 

Xiaoyu Yao commented on HADOOP-13771:
-

Thanks [~aw] for providing the details of "hdfs groups". That explains well 
why we have "hdfs groups" instead of "hadoop groups" today.

bq. There's an additional wrinkle here. The NN is not the only process that is 
doing group resolution. Pretty much any service that does ACL resolution also 
does group resolution to some degree. Making the command 'hadoop groups' is 
going to lead some folks to think that this works for any service... 

Agree. Based on your description, I would prefer to keep "hdfs groups" as-is 
today instead of replacing it. 

How about exposing this as a DEBUG tool only? Below are some choices:
1) Run via the class main only, with no CLI exposed:
hadoop org.apache.hadoop.security.Groups 

2) Add "hadoop groups", which wraps 1) in script; less ideal, as you mentioned 
above. 

3) Add "hdfs debug groups", which wraps 1) in script. 
Explicitly mention that the result is based solely on the configuration in 
core-site.xml. 
It is authoritative compared with "hdfs groups".

bq. I'd therefore propose a different solution. 'hdfs groups' should work like 
nslookup. If the NN is up, it should query the NN and give an authoritative 
answer. If the NN is not up, it should give the local answer but be absolutely 
clear that it is at best a guess and may be incorrect.

This proposal looks good to me as well. MR and HDFS share a common base for the 
"group" lookup. This will change the group lookup for both HDFS and MR. 

> Adding group mapping lookup utility without dependency on HDFS namenode
> ---
>
> Key: HADOOP-13771
> URL: https://issues.apache.org/jira/browse/HADOOP-13771
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, tools
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13771.00.patch
>
>
> We have the {{hdfs groups}} command to troubleshoot issues related to users' 
> group membership lookup with Unix/LDAP. However, there are some limitations to 
> this command: 1) it can only be executed when the namenode is running; 2) any 
> change in the group mapping lookup configuration needs an HDFS namenode 
> restart, which is expensive. 
> This ticket proposes a simple CLI utility like HadoopKerberosName:
> {code}
> hadoop org.apache.hadoop.security.HadoopKerberosName 
> nn/localh...@hdpdev.dev.com
> {code}
> The CLI utility for group membership lookup would be usable as below, without 
> the namenode running and without a restart after a configuration change:
> {code}
> hadoop org.apache.hadoop.security.Groups hdfs
> hdfs : [hadoop, hdfs]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13771) Adding group mapping lookup utility without dependency on HDFS namenode

2016-11-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626413#comment-15626413
 ] 

Allen Wittenauer commented on HADOOP-13771:
---

What I had in my head is that if 'hdfs groups' can't contact the NN after a 
much shorter timeout, it would then run locally and provide an answer with the 
additional text of "(non-authoritative)" or something else in the output.
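
A rough sketch of that fallback in script form (all names here are 
hypothetical; the real behavior would live in the Java tool):

{code}
# Hypothetical nslookup-style fallback: prefer the NN's authoritative
# answer, otherwise fall back to a local lookup and label it clearly.
user="$1"
if hdfs groups "${user}" 2>/dev/null; then
  : # NN answered; its output above is authoritative
else
  echo "(non-authoritative: namenode unreachable, using local config)"
  hadoop org.apache.hadoop.security.Groups "${user}"
fi
{code}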

> Adding group mapping lookup utility without dependency on HDFS namenode
> ---
>
> Key: HADOOP-13771
> URL: https://issues.apache.org/jira/browse/HADOOP-13771
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, tools
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13771.00.patch
>
>
> We have the {{hdfs groups}} command to troubleshoot issues related to users' 
> group membership lookup with Unix/LDAP. However, there are some limitations to 
> this command: 1) it can only be executed when the namenode is running; 2) any 
> change in the group mapping lookup configuration needs an HDFS namenode 
> restart, which is expensive. 
> This ticket proposes a simple CLI utility like HadoopKerberosName:
> {code}
> hadoop org.apache.hadoop.security.HadoopKerberosName 
> nn/localh...@hdpdev.dev.com
> {code}
> The CLI utility for group membership lookup would be usable as below, without 
> the namenode running and without a restart after a configuration change:
> {code}
> hadoop org.apache.hadoop.security.Groups hdfs
> hdfs : [hadoop, hdfs]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12804) Read Proxy Password from Credential Providers in S3 FileSystem

2016-11-01 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626420#comment-15626420
 ] 

Larry McCay commented on HADOOP-12804:
--

[~ste...@apache.org], [~cnauroth] - can I get a review of the patch for this issue?
Thanks!

> Read Proxy Password from Credential Providers in S3 FileSystem
> --
>
> Key: HADOOP-12804
> URL: https://issues.apache.org/jira/browse/HADOOP-12804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Larry McCay
>Assignee: Larry McCay
>Priority: Minor
> Attachments: HADOOP-12804-001.patch, HADOOP-12804-003.patch, 
> HADOOP-12804-004.patch, HADOOP-12804-005.patch, 
> HADOOP-12804-branch-2-002.patch, HADOOP-12804-branch-2-003.patch
>
>
> HADOOP-12548 added credential provider support for the AWS credentials to 
> S3FileSystem. This JIRA is for considering the use of the credential 
> providers for the proxy password as well.
> Instead of adding the proxy password to the config file directly and in clear 
> text, we could provision it in addition to the AWS credentials into a 
> credential provider and keep it out of clear text.
> In terms of usage, it could be added to the same credential store as the AWS 
> credentials or potentially to a more universally available path - since it is 
> the same for everyone. This would however require multiple providers to be 
> configured in the provider.path property and more open file permissions on 
> the store itself.
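
For illustration, provisioning could look like this (alias and provider path 
are examples only):

{code}
# Store the proxy password in a JCEKS credential store instead of keeping
# it in clear text in core-site.xml.
hadoop credential create fs.s3a.proxy.password \
  -provider jceks://hdfs@nn.example.com/user/alice/s3a.jceks
{code}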



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13717) Normalize daemonization behavior of the diskbalancer with balancer and mover

2016-11-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626431#comment-15626431
 ] 

Allen Wittenauer commented on HADOOP-13717:
---

+1

> Normalize daemonization behavior of the diskbalancer with balancer and mover
> 
>
> Key: HADOOP-13717
> URL: https://issues.apache.org/jira/browse/HADOOP-13717
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13717.001.patch
>
>
> Issue found when working with the HDFS balancer.
> In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the 
> "default" case, which calls {{hadoop_start_daemon}}. {{daemon_outfile}}, which 
> specifies the log location, isn't even used here, since the command is being 
> started in the foreground.
> I think we can push the {{hadoop_verify_logdir}} call down into 
> {{hadoop_start_daemon_wrapper}} instead, which does use the outfile.
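
In sketch form (function names as in hadoop-functions.sh; simplified, not 
the actual patch):

{code}
# Move the log-dir check out of hadoop_daemon_handler's foreground path and
# into the wrapper, which is the code that actually writes ${outfile}.
hadoop_start_daemon_wrapper ()
{
  local daemonname=$1
  local class=$2
  local pidfile=$3
  local outfile=$4
  shift 4

  hadoop_verify_logdir   # checked here, where ${outfile} is really used
  hadoop_start_daemon "${daemonname}" "${class}" "${pidfile}" "$@" \
    >> "${outfile}" 2>&1 &
}
{code}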



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13632) Daemonization does not check process liveness before renicing

2016-11-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626450#comment-15626450
 ] 

Allen Wittenauer commented on HADOOP-13632:
---

The hadoop_status_daemon check in hadoop_start_secure_daemon_wrapper is 
checking the wrong pid file. $pidfile hasn't been defined.

> Daemonization does not check process liveness before renicing
> -
>
> Key: HADOOP-13632
> URL: https://issues.apache.org/jira/browse/HADOOP-13632
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13632.001.patch
>
>
> If you try to daemonize a process that is incorrectly configured, it will die 
> quite quickly. However, the daemonization function will still try to renice 
> it even if it's down, leading to something like this for my namenode:
> {noformat}
> -> % bin/hdfs --daemon start namenode
> ERROR: Cannot set priority of namenode process 12036
> {noformat}
> It'd be more user-friendly if, instead of this renice error, we said that the 
> process couldn't be started.
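
The liveness guard itself is cheap; in shell it is essentially (a sketch 
with hypothetical variable names, not the committed patch):

{code}
# Only renice a daemon that is actually still running; otherwise report a
# start failure instead of a confusing renice error.
if kill -0 "${pid}" 2>/dev/null; then
  renice "${priority}" "${pid}"
else
  echo "ERROR: ${daemonname} failed to start (process ${pid} exited)"
fi
{code}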



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8500) Fix javadoc jars to not contain entire target directory

2016-11-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626465#comment-15626465
 ] 

Allen Wittenauer commented on HADOOP-8500:
--

{code}
$ git status
On branch h13397
Untracked files:
  (use "git add ..." to include in what will be committed)

hadoop-common-project/hadoop-common/api/
hadoop-hdfs-project/hadoop-hdfs-client/api/
hadoop-hdfs-project/hadoop-hdfs/api/
{code}

:(

> Fix javadoc jars to not contain entire target directory
> ---
>
> Key: HADOOP-8500
> URL: https://issues.apache.org/jira/browse/HADOOP-8500
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.0-alpha
> Environment: N/A
>Reporter: EJ Ciramella
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-8500.001.patch, HADOOP-8500.002.patch, 
> HADOOP-8500.003.patch, HADOOP-8500.patch, site-redo.tar
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The javadoc jars contain the contents of the target directory - which 
> includes classes and all sorts of binary files that it shouldn't.
> Sometimes the resulting javadoc jar is 10X bigger than it should be.
> The fix is to reconfigure maven to use "api" as its destDir for javadoc 
> generation.
> I have a patch/diff incoming.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13453) S3Guard: Instrument new functionality with Hadoop metrics.

2016-11-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13453:
---
Assignee: Ai Deng

> S3Guard: Instrument new functionality with Hadoop metrics.
> --
>
> Key: HADOOP-13453
> URL: https://issues.apache.org/jira/browse/HADOOP-13453
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Ai Deng
>
> Provide Hadoop metrics showing operational details of the S3Guard 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13667) Fix typing mistake of inline document in hadoop-metrics2.properties

2016-11-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626570#comment-15626570
 ] 

Daniel Templeton commented on HADOOP-13667:
---

Thanks for the patch, [~demongaorui].  Looks good to me, but I don't know much 
about exporting the metrics to Ganglia.  [~andrew.wang], can you take a look or 
recommend a reviewer?

> Fix typing mistake of inline document in hadoop-metrics2.properties
> ---
>
> Key: HADOOP-13667
> URL: https://issues.apache.org/jira/browse/HADOOP-13667
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Rui Gao
>Assignee: Rui Gao
> Attachments: HADOOP-13667.2.patch, HADOOP-13667.patch
>
>
> Fix typing mistake of inline document in hadoop-metrics2.properties.
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcesName{code} should be 
> {code}#*.sink.ganglia.tagsForPrefix.jvm=ProcessName{code}
> We could also add examples to the inline documentation to make the metrics 
> tag configuration easier to understand.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13427) Eliminate needless uses of FileSystem.exists, isFile, isDirectory

2016-11-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13427:
---
Status: Patch Available  (was: Open)

> Eliminate needless uses of FileSystem.exists, isFile, isDirectory 
> --
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we often 
> see exists+open and exists+delete patterns where the exists probe is 
> needless. Against object stores, it is needless and expensive.
> Hadoop can set an example here by stripping them out. It will also show where 
> there are opportunities to optimise things better and/or improve reporting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13427) Eliminate needless uses of FileSystem.exists, isFile, isDirectory

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626656#comment-15626656
 ] 

Hadoop QA commented on HADOOP-13427:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-13427 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13427 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830700/HADOOP-13427-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10949/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Eliminate needless uses of FileSystem.exists, isFile, isDirectory 
> --
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we often 
> see exists+open and exists+delete patterns where the exists probe is 
> needless. Against object stores, it is needless and expensive.
> Hadoop can set an example here by stripping them out. It will also show where 
> there are opportunities to optimise things better and/or improve reporting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false

2016-11-01 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HADOOP-13781:
--
Status: Patch Available  (was: Open)

> ZKFailoverController#initZK should use the ActiveStanbyElector constructor 
> with failFast as false
> -
>
> Key: HADOOP-13781
> URL: https://issues.apache.org/jira/browse/HADOOP-13781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 3.0.0-alpha1
>Reporter: Andres Perez
>Priority: Minor
> Attachments: HADOOP-13781.patch
>
>
> YARN-4243 introduced logic that retries establishing the connection 
> when initializing the {{ActiveStandbyElector}}, adding the parameter 
> {{failFast}}.
> {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} 
> set to false, to let the ZKFC wait longer for the ZooKeeper server to be in a 
> ready state when first initializing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13583) Incorporate checkcompatibility script which runs Java API Compliance Checker

2016-11-01 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626673#comment-15626673
 ] 

Robert Kanter commented on HADOOP-13583:


+1 after these two trivial things:
- 2 lines with tabs reported by Jenkins
- checkcompatibility.py should have executable permissions like the other 
programs in the bin dir, and so you can actually run it out of the box :)

> Incorporate checkcompatibility script which runs Java API Compliance Checker
> 
>
> Key: HADOOP-13583
> URL: https://issues.apache.org/jira/browse/HADOOP-13583
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13583.001.patch, HADOOP-13583.002.patch, 
> HADOOP-13583.003.patch
>
>
> Based on discussion at YETUS-445, this code can't go there, but it's still 
> very useful for release managers. A similar variant of this script has been 
> used for a while by Apache HBase and Apache Kudu, and IMO JACC output is 
> easier to understand than JDiff.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13782) Make MutableRates metrics thread-local write, aggregate-on-read

2016-11-01 Thread Erik Krogen (JIRA)
Erik Krogen created HADOOP-13782:


 Summary: Make MutableRates metrics thread-local write, 
aggregate-on-read
 Key: HADOOP-13782
 URL: https://issues.apache.org/jira/browse/HADOOP-13782
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Reporter: Erik Krogen
Assignee: Erik Krogen


Currently the {{MutableRates}} metrics class serializes all writes to metrics 
it contains because of its use of {{MetricsRegistry.add()}} (i.e., even two 
increments of unrelated metrics contained within the same {{MutableRates}} 
object will serialize w.r.t. each other). This class is used by 
{{RpcDetailedMetrics}}, which may have many hundreds of threads contending to 
modify these metrics. Instead we should allow updates to unrelated metrics 
objects to happen concurrently. To do so we can let each thread locally collect 
metrics, and on a {{snapshot}}, aggregate the metrics from all of the threads. 

I have collected some benchmark performance numbers in HADOOP-13747 
(https://issues.apache.org/jira/secure/attachment/12835043/benchmark_results) 
which indicate that this can bring significantly higher performance in high 
contention situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13427) Eliminate needless uses of FileSystem.exists, isFile, isDirectory

2016-11-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13427:
---
Attachment: HADOOP-13427.003.patch

+1 the v2 patch.

I rebased the patch onto the {{trunk}} branch as the v2 patch could not apply 
cleanly. It may be interesting if [~ste...@apache.org] can +1 your own 
patch; see [#HADOOP-13427.003.patch].

I don't quite get the following code snippet:
{code:title=SwiftTestUtils.java}
throw (IOException) new FileNotFoundException(message + ": not found "
    + path + " in " + path.getParent() + ": " + e + " -- "
    + ls(fileSystem, path.getParent())).initCause(e);
{code}
and
{code:title=FileSystemApplicationHistoryStore.java}
try {
  fs.getFileStatus(applicationHistoryFile);
} catch (FileNotFoundException e) {
  throw (FileNotFoundException) new FileNotFoundException(
      "History file for application " + appId + " is not found: " + e)
      .initCause(e);
}
{code}

> Eliminate needless uses of FileSystem.exists, isFile, isDirectory 
> --
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch, 
> HADOOP-13427.003.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we often 
> see exists+open and exists+delete patterns where the exists probe is 
> needless. Against object stores, it is needless and expensive.
> Hadoop can set an example here by stripping them out. It will also show where 
> there are opportunities to optimise things better and/or improve reporting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13427) Eliminate needless uses of FileSystem.exists, isFile, isDirectory

2016-11-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13427:
---
Priority: Major  (was: Minor)

> Eliminate needless uses of FileSystem.exists, isFile, isDirectory 
> --
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch, 
> HADOOP-13427.003.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we often 
> see exists+open and exists+delete patterns where the exists probe is 
> needless. Against object stores, it is needless and expensive.
> Hadoop can set an example here by stripping them out. It will also show where 
> there are opportunities to optimise things better and/or improve reporting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13427) Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}

2016-11-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13427:
---
Summary: Eliminate needless uses of FileSystem#{exists(), isFile(), 
isDirectory()}  (was: Eliminate needless uses of FileSystem.exists, isFile, 
isDirectory )

> Eliminate needless uses of FileSystem#{exists(), isFile(), isDirectory()}
> -
>
> Key: HADOOP-13427
> URL: https://issues.apache.org/jira/browse/HADOOP-13427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13427-001.patch, HADOOP-13427-002.patch, 
> HADOOP-13427.003.patch
>
>
> We're cleaning up Hive and Spark's use of FileSystem.exists, because we often 
> see exists+open and exists+delete patterns where the exists probe is 
> needless. Against object stores, it is needless and expensive.
> Hadoop can set an example here by stripping them out. It will also show where 
> there are opportunities to optimise things better and/or improve reporting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStanbyElector constructor with failFast as false

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626772#comment-15626772
 ] 

Hadoop QA commented on HADOOP-13781:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 20s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13781 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836426/HADOOP-13781.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8541d46fa4f8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 76893a4 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10950/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10950/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10950/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ZKFailoverController#initZK should use the ActiveStanbyElector constructor 
> with failFast as false
> -
>
> Key: HADOOP-13781
>  

[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626805#comment-15626805
 ] 

Ravi Prakash commented on HADOOP-13773:
---

Andrew! The precommit was known to fail because the patch applied only to 
branch-2 (not trunk).

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.8.0
>
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take effect.
> In bin/hadoop:
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE has no effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m wins, and -Xmx1024m is 
> ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13583) Incorporate checkcompatibility script which runs Java API Compliance Checker

2016-11-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626818#comment-15626818
 ] 

Andrew Wang commented on HADOOP-13583:
--

Thanks for reviewing Robert. The two tabs are example output in a comment for 
some parsing code, so I'd prefer not to change it. I can chmod +x the file 
before commit.

Robert, this good with you?

> Incorporate checkcompatibility script which runs Java API Compliance Checker
> 
>
> Key: HADOOP-13583
> URL: https://issues.apache.org/jira/browse/HADOOP-13583
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13583.001.patch, HADOOP-13583.002.patch, 
> HADOOP-13583.003.patch
>
>
> Based on discussion at YETUS-445, this code can't go there, but it's still 
> very useful for release managers. A similar variant of this script has been 
> used for a while by Apache HBase and Apache Kudu, and IMO JACC output is 
> easier to understand than JDiff.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626832#comment-15626832
 ] 

Andrew Wang commented on HADOOP-13773:
--

Hi Ravi, precommit is multi-branch now, if you follow the naming convention of 
e.g. HADOOP-13733.branch-2.001.patch. See:

https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch

The precommit bot will also detect the branch correctly for github PRs, though 
if there's both a PR and a patch (as on this JIRA), the precommit bot will 
always use the PR. When this has happened in the past, I've resorted to filing 
a new JIRA to unstick precommit.

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.8.0
>
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take effect.
> In bin/hadoop:
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE has no effect.
> For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java process is 
> 'java -Xmx1024m ... -Xmx512m ...'; the later -Xmx512m wins, and -Xmx1024m is 
> ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

2016-11-01 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626837#comment-15626837
 ] 

Arpit Agarwal commented on HADOOP-13738:


Thanks for the review, [~xyao].

[~kihwal], it didn't sound like you had any objections to the proposed 
approach, but I'll wait until the end of this week before committing.

Filed HDFS-11086 to separately address some more improvements in the DN's use 
of DiskChecker. That will address the failure case you brought up where the 
check takes minutes.

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch, HADOOP-13738.02.patch, 
> HADOOP-13738.03.patch, HADOOP-13738.04.patch, HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.
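
A rough shell analogue of the proposed check, for illustration only (the 
volume-path variable is hypothetical):

{code}
# attempt a real write against the volume and force it to disk; a dead disk or
# controller fails the write/fsync instead of passing a permissions-only check
dd if=/dev/zero of="$VOLUME_DIR/.diskcheck" bs=4k count=1 conv=fsync \
  || echo "disk check failed for $VOLUME_DIR" >&2
rm -f "$VOLUME_DIR/.diskcheck"
{code}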



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13738) DiskChecker should perform some disk IO

2016-11-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626847#comment-15626847
 ] 

Kihwal Lee commented on HADOOP-13738:
-

+1

> DiskChecker should perform some disk IO
> ---
>
> Key: HADOOP-13738
> URL: https://issues.apache.org/jira/browse/HADOOP-13738
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-13738.01.patch, HADOOP-13738.02.patch, 
> HADOOP-13738.03.patch, HADOOP-13738.04.patch, HADOOP-13738.05.patch
>
>
> DiskChecker can fail to detect total disk/controller failures indefinitely. 
> We have seen this in real clusters. DiskChecker performs simple 
> permissions-based checks on directories which do not guarantee that any disk 
> IO will be attempted.
> A simple improvement is to write some data and flush it to the disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13773) wrong HADOOP_CLIENT_OPTS in hadoop-env on branch-2

2016-11-01 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626888#comment-15626888
 ] 

Ravi Prakash commented on HADOOP-13773:
---

Awesome! Sounds good, Andrew! Thanks. I'll take care of this in the future. 

> wrong HADOOP_CLIENT_OPTS in hadoop-env  on branch-2
> ---
>
> Key: HADOOP-13773
> URL: https://issues.apache.org/jira/browse/HADOOP-13773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.1, 2.7.3
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 2.8.0
>
> Attachments: HADOOP-13773.patch
>
>
> In conf/hadoop-env.sh,
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> When I set HADOOP_HEAPSIZE and run 'hadoop jar ...', the JVM args do not take 
> effect. I see that in bin/hadoop,
> exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
> HADOOP_OPTS comes after JAVA_HEAP_MAX, so HADOOP_HEAPSIZE does not take 
> effect. For example, if I run 'HADOOP_HEAPSIZE=1024 hadoop jar ...', the java 
> process is 'java -Xmx1024m ... -Xmx512m ...', so -Xmx512m takes effect and 
> -Xmx1024m is ignored.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13781) ZKFailoverController#initZK should use the ActiveStandbyElector constructor with failFast as false

2016-11-01 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated HADOOP-13781:
--
Attachment: HADOOP-13781.2.patch

The {{ActiveStandbyElector#reEstablishSession}} method throws neither 
{{IOException}} nor {{KeeperException}}, which makes the ZooKeeper connection 
fail silently when invoking {{ZKFailoverController#initZK}}, and makes the 
unit test fail when the {{failFast}} parameter is set to {{false}}.

> ZKFailoverController#initZK should use the ActiveStandbyElector constructor 
> with failFast as false
> -
>
> Key: HADOOP-13781
> URL: https://issues.apache.org/jira/browse/HADOOP-13781
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 3.0.0-alpha1
>Reporter: Andres Perez
>Priority: Minor
> Attachments: HADOOP-13781.2.patch, HADOOP-13781.patch
>
>
> YARN-4243 introduced logic that retries establishing the connection when 
> initializing the {{ActiveStandbyElector}}, adding the parameter 
> {{failFast}}.
> {{ZKFailoverController#initZK}} should use this constructor with {{failFast}} 
> set to false, to let the ZKFC wait longer for the ZooKeeper server to be in 
> a ready state when first initializing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-01 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13650:
---
Attachment: HADOOP-13650-HADOOP-13345.000.patch

Uploaded a work-in-progress patch.  The patch requires the 
{{MetadataStore}} interface and the {{DynamoDB}} metadata store to be ready.

[~liuml07], one question I have about the implementation: for the 
{{initialize()}} / {{destroy()}} functions, can we provide a version of these 
functions that does not take {{S3AFileSystem}} as a parameter (i.e., takes 
{{Configuration}} instead)?  Providing an s3a path in {{s3a s3guard init -m 
s3a://bucket/path}} seems unnecessary.

[~ste...@apache.org] I've added a new {{s3a}} command in this patch. 

Additionally, I propose using a {{URI}} for the metadata store table name, so 
that we can support {{dynamodb://}}, {{mysql://}}, {{local://}}, etc. 
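
A hedged sketch of how that could look on the command line (these subcommands 
and flags are proposals under discussion, not a shipped interface):

{code}
# create a metadata store, naming the backend by URI instead of an s3a path
s3a s3guard init -m dynamodb://my-table
# an import-style subcommand (hypothetical, modeled on the EMRFS CLI)
s3a s3guard import -m dynamodb://my-table s3a://my-bucket/path
{code}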


> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, i.e., create or delete a metadata store, or {{import}} / {{sync}} the 
> file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13673) Update sbin/start-* and sbin/stop-* to be smarter

2016-11-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626912#comment-15626912
 ] 

Hadoop QA commented on HADOOP-13673:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 2 new + 75 unchanged - 0 fixed = 
77 total (was 75) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 10s{color} | {color:orange} The patch generated 8 new + 122 unchanged - 2 
fixed = 130 total (was 124) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m  7s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}211m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13673 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836422/HADOOP-13673.00.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  xml  |
| uname | Linux 02e23a536e4f 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revi

[jira] [Commented] (HADOOP-13583) Incorporate checkcompatibility script which runs Java API Compliance Checker

2016-11-01 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626926#comment-15626926
 ] 

Robert Kanter commented on HADOOP-13583:


Sounds good to me.

> Incorporate checkcompatibility script which runs Java API Compliance Checker
> 
>
> Key: HADOOP-13583
> URL: https://issues.apache.org/jira/browse/HADOOP-13583
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13583.001.patch, HADOOP-13583.002.patch, 
> HADOOP-13583.003.patch
>
>
> Based on discussion at YETUS-445, this code can't go there, but it's still 
> very useful for release managers. A similar variant of this script has been 
> used for a while by Apache HBase and Apache Kudu, and IMO JACC output is 
> easier to understand than JDiff.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13783) Improve efficiency of WASB over page blobs

2016-11-01 Thread NITIN VERMA (JIRA)
NITIN VERMA created HADOOP-13783:


 Summary: Improve efficiency of WASB over page blobs
 Key: HADOOP-13783
 URL: https://issues.apache.org/jira/browse/HADOOP-13783
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure
Reporter: NITIN VERMA


1)  Add telemetry to the WASB driver. The WASB driver lacks any logging or 
telemetry, which makes troubleshooting very difficult. For example, we don't 
know where the high end-to-end latency between HBase and Azure storage came 
from when the Azure storage server latency was very low. We also don't know 
why WASB can only do 166 IOPS, which is well below Azure storage's 500 IOPS. 
And we had several incidents before related to storage latency; because of the 
lack of logs, we couldn't find the ownership of the incidents quickly.

2)  Resolve the hot-spotting issue in how the WASB driver partitions Azure 
page blobs, by changing the key. The current key design is causing hot 
spotting on Azure storage. 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13736) Change PathMetadata to hold S3AFileStatus instead of FileStatus.

2016-11-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626981#comment-15626981
 ] 

Mingliang Liu commented on HADOOP-13736:


I'm holding off on committing. Thanks,

> Change PathMetadata to hold S3AFileStatus instead of FileStatus.
> 
>
> Key: HADOOP-13736
> URL: https://issues.apache.org/jira/browse/HADOOP-13736
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13736-HADOOP-13345.000.patch, 
> HADOOP-13736-HADOOP-13345.001.patch, HADOOP-13736.000.patch, 
> HADOOP-13736.wip-01.patch
>
>
> {{S3AFileStatus}} is implemented differently from {{FileStatus}}; for 
> instance, {{S3AFileStatus#isEmptyDirectory()}} is not implemented in 
> {{FileStatus}}. And {{access_time}}, {{block_replication}}, {{owner}}, 
> {{group}}, and a few other fields are not meaningful in {{S3AFileStatus}}. 
> So in the scope of {{S3Guard}}, {{PathMetadata}} should hold 
> {{S3AFileStatus}} instead of {{FileStatus}} to avoid casting the types back 
> and forth in S3A. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13650) S3Guard: Provide command line tools to manipulate metadata store.

2016-11-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626982#comment-15626982
 ] 

Mingliang Liu commented on HADOOP-13650:


Thanks for the nice work, [~eddyxu]!

{quote}
One question I have about the implementation: for the initialize() / destroy 
functions, can we provide a version of these functions that does not take 
S3AFileSystem as a parameter (i.e., takes Configuration instead)? Providing an 
s3a path in s3a s3guard init -m s3a://bucket/path seems unnecessary.
{quote}
Basically we can create an S3AFileSystem via a URI (bucket) and a conf, and I 
thought we could initialize the s3a fs object anywhere.

The DDBMetadataStore needs the region, bucket (default table name), AWS keys, 
etc., among which the bucket is not specified in the configuration. The newly 
added config key DDB_TABLE_NAME_KEY can be helpful in this case. In short, I 
think it's possible to achieve this. Passing the s3a URI along with the 
MetadataStore URI in the CLI tool seems unnecessary to me as well. You can 
assume I can construct a DDBMetadataStore from configuration alone; I'll let 
you know if I can't. This will be considered in the [HADOOP-13449] v2 patch, 
which is on its way.
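
As a hedged sketch of configuration-only construction (the property names 
below are purely illustrative stand-ins for whatever DDB_TABLE_NAME_KEY and 
friends resolve to):

{code}
# hypothetical: everything the store needs comes from configuration, so no s3a
# path is required on the command line
s3a s3guard init -m dynamodb://my-table \
  -D fs.s3a.s3guard.ddb.table=my-table \
  -D fs.s3a.s3guard.ddb.region=us-west-2
{code}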

I'll come back to review this patch this week.

> S3Guard: Provide command line tools to manipulate metadata store.
> -
>
> Key: HADOOP-13650
> URL: https://issues.apache.org/jira/browse/HADOOP-13650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-13650-HADOOP-13345.000.patch
>
>
> Similar systems like EMRFS have CLI tools to manipulate the metadata 
> store, i.e., create or delete a metadata store, or {{import}} / {{sync}} the 
> file metadata between the metadata store and S3. 
> http://docs.aws.amazon.com//ElasticMapReduce/latest/ReleaseGuide/emrfs-cli-reference.html
> S3Guard should offer similar functionality. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13783) Improve efficiency of WASB over page blobs

2016-11-01 Thread NITIN VERMA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626985#comment-15626985
 ] 

NITIN VERMA commented on HADOOP-13783:
--

I don't have permission to assign this JIRA. Could someone assign this to me? 

> Improve efficiency of WASB over page blobs
> --
>
> Key: HADOOP-13783
> URL: https://issues.apache.org/jira/browse/HADOOP-13783
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Reporter: NITIN VERMA
>
> 1) Add telemetry to the WASB driver. The WASB driver lacks any logging or 
> telemetry, which makes troubleshooting very difficult. For example, we don't 
> know where the high end-to-end latency between HBase and Azure storage came 
> from when the Azure storage server latency was very low. We also don't know 
> why WASB can only do 166 IOPS, which is well below Azure storage's 500 IOPS. 
> And we had several incidents before related to storage latency; because of 
> the lack of logs, we couldn't find the ownership of the incidents quickly.
> 2) Resolve the hot-spotting issue in how the WASB driver partitions Azure 
> page blobs, by changing the key. The current key design is causing hot 
> spotting on Azure storage. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12453) Support decoding KMS Delegation Token with its own Identifier

2016-11-01 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15626987#comment-15626987
 ] 

Xiao Chen commented on HADOOP-12453:


Hi [~xyao],
Thanks for reporting this and providing a patch. Looks great overall!

A couple of comments/questions:
- It's been a while, so this needs a rebase. KMSCP's new 
{{TOKEN_KIND_STR}} can be removed too.
- Should this really go to hdfs's services, as opposed to common?
- Can this work with just the KMSDelegationTokenIdentifier class (without the 
outer KMSDelegationToken class)?

If you're busy, I'd be happy to continue to work on this.

> Support decoding KMS Delegation Token with its own Identifier
> -
>
> Key: HADOOP-12453
> URL: https://issues.apache.org/jira/browse/HADOOP-12453
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Affects Versions: 2.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12453.00.patch
>
>
> kms-dt currently does not have its own token identifier class to decode it 
> properly as shown in the HDFS logs below. This JIRA is opened to add support 
> for that.
> {code}
> 2015-09-30 
> 22:36:14,379|beaver.machine|INFO|5619|140004068153152|MainThread|15/09/30 
> 22:36:14 WARN token.Token: Cannot find class for token kind kms-dt
> 2015-09-30 
> 22:36:14,380|beaver.machine|INFO|5619|140004068153152|MainThread|15/09/30 
> 22:36:14 INFO security.TokenCache: Got dt for 
> hdfs://tde-hdfs-3.novalocal:8020; Kind: kms-dt, Service: 172.22.64.179:9292, 
> Ident: 00 06 68 72 74 5f 71 61 02 72 6d 00 8a 01 50 20 66 1c a3 8a 01 50 44 
> 72 a0 a3 0f 03
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


