[jira] [Commented] (HADOOP-11541) Raw XOR coder

2015-02-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311902#comment-14311902
 ] 

Kai Zheng commented on HADOOP-11541:


Yes, I've already done it that way. Thanks, Uma and Yi.

> Raw XOR coder
> -
>
> Key: HADOOP-11541
> URL: https://issues.apache.org/jira/browse/HADOOP-11541
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11541-v1.patch, HADOOP-11541-v2.patch
>
>
> This will implement the XOR code by porting the code from HDFS-RAID. The coder 
> implementing this algorithm is needed by some high-level codecs like LRC.
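
For illustration, an XOR code computes a single parity unit as the bitwise XOR of all 
data units, and any one lost unit is recovered by XOR-ing the survivors. A minimal Java 
sketch of the idea (hypothetical helpers, not the patch code):

{code}
// parity[i] = data[0][i] ^ data[1][i] ^ ... ^ data[k-1][i]
static void xorEncode(byte[][] data, byte[] parity) {
  java.util.Arrays.fill(parity, (byte) 0);
  for (byte[] chunk : data) {
    for (int i = 0; i < parity.length; i++) {
      parity[i] ^= chunk[i];
    }
  }
}

// Recover one erased chunk: XOR the parity with all surviving chunks.
static void xorDecode(byte[][] survivors, byte[] parity, byte[] erased) {
  System.arraycopy(parity, 0, erased, 0, parity.length);
  for (byte[] chunk : survivors) {
    for (int i = 0; i < erased.length; i++) {
      erased[i] ^= chunk[i];
    }
  }
}
{code}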



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11541) Raw XOR coder

2015-02-08 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311843#comment-14311843
 ] 

Uma Maheswara Rao G commented on HADOOP-11541:
--

We don't really need a JIRA for CHANGES.txt alone. You can just go ahead and 
commit the CHANGES.txt change directly next time.

> Raw XOR coder
> -
>
> Key: HADOOP-11541
> URL: https://issues.apache.org/jira/browse/HADOOP-11541
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11541-v1.patch, HADOOP-11541-v2.patch
>
>
> This will implement the XOR code by porting the code from HDFS-RAID. The coder 
> implementing this algorithm is needed by some high-level codecs like LRC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-08 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-11512:
-
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing issues for users if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-08 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-11512:
-
Attachment: HADOOP-11512.patch

Thanks Ryan. I've attached a cleaned-up version that removes the extra 
indentation changes made on some unrelated lines.

+1, will commit after Jenkins runs through this.

I also attached a tested branch-2 variant, as the back-port was not 
straightforward.
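
For context, the gist of the change is a one-line swap in SerializationFactory's 
constructor; a sketch assuming the surrounding loop looks roughly like this (the 
actual patch may differ):

{code}
// Before: getStrings() keeps surrounding whitespace, so an entry such as
// " org.apache.hadoop.io.serializer.WritableSerialization" fails the classpath lookup.
for (String serializerName : conf.getStrings(
    CommonConfigurationKeys.IO_SERIALIZATIONS_KEY)) {
  add(conf, serializerName);
}

// After: getTrimmedStrings() trims each class name before it is resolved.
for (String serializerName : conf.getTrimmedStrings(
    CommonConfigurationKeys.IO_SERIALIZATIONS_KEY)) {
  add(conf, serializerName);
}
{code}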

> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing issues for users if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-08 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-11512:
-
Attachment: HADOOP-11512.branch-2.patch

> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Attachments: HADOOP-11512.branch-2.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing issues for users if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11510) Expose truncate API via FileContext

2015-02-08 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11510:

Attachment: HADOOP-11510.003.patch

Updated the patch. [~shv], please take a look at whether it addresses your comments, 
thanks.

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch, 
> HADOOP-11510.003.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.
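
Once exposed, the call would presumably mirror {{FileSystem#truncate}}; a hypothetical 
usage sketch (the final signature may differ):

{code}
// Hypothetical usage: truncate a file to 1024 bytes through FileContext.
FileContext fc = FileContext.getFileContext(new Configuration());
boolean done = fc.truncate(new Path("/user/test/data.bin"), 1024L);
if (!done) {
  // On HDFS, truncate can be asynchronous: the new length is only fully
  // readable once block recovery on the last block completes.
}
{code}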



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10797) Hardcoded path to "bash" is not portable

2015-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311799#comment-14311799
 ] 

Hadoop QA commented on HADOOP-10797:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12654572/bash.patch
  against trunk revision 1382ae5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-httpfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5632//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5632//console

This message is automatically generated.

> Hardcoded path to "bash" is not portable
> 
>
> Key: HADOOP-10797
> URL: https://issues.apache.org/jira/browse/HADOOP-10797
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.4.1
>Reporter: Dmitry Sivachenko
> Attachments: bash.patch
>
>
> Most of the shell scripts use a shebang line in the following format:
> #!/usr/bin/env bash
> But some scripts contain hardcoded "/bin/bash" which is not portable.
> Please use #!/usr/bin/env bash instead for portability.
> PS: it would be much better to switch to the standard Bourne shell /bin/sh; do 
> these scripts really need bash?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-02-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311700#comment-14311700
 ] 

Yi Liu commented on HADOOP-11510:
-

{quote}
can we move testTruncateThroughFileContext() into 
TestHDFSFileContextMainOperations. Looks like a dedicated test for FileContext
{quote}
Yes, thanks [~shv]. Actually I had planned to do this later today; I also 
thought it would be good to have the test in {{TestHDFSFileContextMainOperations}} and 
guessed you would make this comment :)

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11510) Expose truncate API via FileContext

2015-02-08 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311697#comment-14311697
 ] 

Konstantin Shvachko commented on HADOOP-11510:
--

This looks good to me. One thing only: can we move 
{{testTruncateThroughFileContext()}} into 
{{TestHDFSFileContextMainOperations}}? It looks like a dedicated test for 
FileContext. It has its own routines for creating files via FC, which may 
simplify the truncate test a bit.

> Expose truncate API via FileContext
> ---
>
> Key: HADOOP-11510
> URL: https://issues.apache.org/jira/browse/HADOOP-11510
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Yi Liu
>Assignee: Yi Liu
> Attachments: HADOOP-11510.001.patch, HADOOP-11510.002.patch
>
>
> We also need to expose truncate API via {{org.apache.hadoop.fs.FileContext}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11565) Add --slaves shell option

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11565:
--
Summary: Add --slaves shell option  (was: Add --batch shell option)

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
>
> Add a --batch shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.
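
Presumably the end state looks something like the following (hypothetical sketch; 
the exact flag wiring may differ):

{noformat}
# Run a daemon subcommand across all hosts in the slaves file,
# replacing hadoop-daemons.sh / yarn-daemons.sh:
hdfs --slaves --daemon start datanode
yarn --slaves --daemon start nodemanager
{noformat}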



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11565) Add --slaves shell option

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11565:
--
Description: Add a --slaves shell option to hadoop-config.sh to trigger the 
given command on slave nodes.  This is required to deprecate hadoop-daemons.sh 
and yarn-daemons.sh.  (was: Add a --batch shell option to hadoop-config.sh to 
trigger the given command on slave nodes.  This is required to deprecate 
hadoop-daemons.sh and yarn-daemons.sh.)

> Add --slaves shell option
> -
>
> Key: HADOOP-11565
> URL: https://issues.apache.org/jira/browse/HADOOP-11565
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Allen Wittenauer
>
> Add a --slaves shell option to hadoop-config.sh to trigger the given command 
> on slave nodes.  This is required to deprecate hadoop-daemons.sh and 
> yarn-daemons.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11565) Add --batch shell option

2015-02-08 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11565:
-

 Summary: Add --batch shell option
 Key: HADOOP-11565
 URL: https://issues.apache.org/jira/browse/HADOOP-11565
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Allen Wittenauer


Add a --batch shell option to hadoop-config.sh to trigger the given command on 
slave nodes.  This is required to deprecate hadoop-daemons.sh and 
yarn-daemons.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11539) add Apache Thrift support to hadoop-maven-plugins

2015-02-08 Thread John (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311691#comment-14311691
 ] 

John commented on HADOOP-11539:
---

"hbase-thrift" and "hive-service" are the use cases.  And if we reuse the code 
of hadoop-maven-plugins, their build will be more easy to maintain.


> add Apache Thrift support to hadoop-maven-plugins
> -
>
> Key: HADOOP-11539
> URL: https://issues.apache.org/jira/browse/HADOOP-11539
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.6.0
>Reporter: John
>Assignee: John
> Attachments: HADOOP-11539.patch
>
>
> Make generating Java code from Thrift IDL easier when there are many 
> input files.  Good news for hbase-thrift, hive-service, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311687#comment-14311687
 ] 

Kai Zheng commented on HADOOP-11563:


Hmm, I'm not sure. It's hard to say whether CHANGES.txt is code or not. If you 
search, or just open the following URL, you can find many issues for such 
things. [CHANGES.txt related issues | 
https://issues.apache.org/jira/browse/HADOOP-3266?jql=project%20in%20%28HADOOP%2C%20HDFS%29%20AND%20text%20~%20CHANGES.txt]
bq. You can change the CHANGES.txt and commit log directly through git.
I thought that would also work for me. I will get it done directly. Thanks.

> Add the missed entry for CHANGES.txt
> 
>
> Key: HADOOP-11563
> URL: https://issues.apache.org/jira/browse/HADOOP-11563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11563-v1.patch
>
>
> When committing HADOOP-11541, we forgot to update the 
> hadoop-common/CHANGES-HDFS-EC-7285.txt file. This is to add the missing entry. 
> Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11563:

Resolution: Invalid
Status: Resolved  (was: Patch Available)

> Add the missed entry for CHANGES.txt
> 
>
> Key: HADOOP-11563
> URL: https://issues.apache.org/jira/browse/HADOOP-11563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11563-v1.patch
>
>
> When committing HADOOP-11541, we forgot to update the 
> hadoop-common/CHANGES-HDFS-EC-7285.txt file. This is to add the missing entry. 
> Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11542) Raw Reed-Solomon coder in pure Java

2015-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311661#comment-14311661
 ] 

Hadoop QA commented on HADOOP-11542:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12697330/HADOOP-11542-v3.patch
  against trunk revision 1382ae5.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5631//console

This message is automatically generated.

> Raw Reed-Solomon coder in pure Java
> ---
>
> Key: HADOOP-11542
> URL: https://issues.apache.org/jira/browse/HADOOP-11542
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11542-v1.patch, HADOOP-11542-v2.patch, 
> HADOOP-11542-v3.patch
>
>
> This will implement an RS coder by porting the existing code in HDFS-RAID to the 
> new codec and coder framework, which could be useful in case native support 
> isn't available or convenient in some environments or platforms.
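
For background, Reed-Solomon coding does its arithmetic in the Galois field GF(2^8), 
and a pure-Java coder typically multiplies field elements via precomputed log/exp 
tables. An illustrative sketch of that core operation (not the patch code):

{code}
// GF(2^8) multiply via log/exp tables precomputed from a field generator.
// Since exp and log are inverse maps, a*b = exp((log(a) + log(b)) mod 255).
static int gfMultiply(int a, int b, int[] logTable, int[] expTable) {
  if (a == 0 || b == 0) {
    return 0;
  }
  return expTable[(logTable[a] + logTable[b]) % 255];
}
{code}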



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11520) Clean incomplete multi-part uploads in S3A tests

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311652#comment-14311652
 ] 

Hudson commented on HADOOP-11520:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #99 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/99/])


> Clean incomplete multi-part uploads in S3A tests
> 
>
> Key: HADOOP-11520
> URL: https://issues.apache.org/jira/browse/HADOOP-11520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11520.001.patch
>
>
> As proposed in HADOOP-11488. This patch activates the purging functionality 
> of s3a at the start of each test. This cleans up any in-progress multi-part 
> uploads in the test bucket, preventing unknowing users from eternally paying 
> Amazon for the space of the already uploaded parts of previous tests that 
> failed during a multi-part upload. 
> People who have run the s3a tests should run a single test (after this patch 
> is applied, of course) against all their test buckets (or manually abort the 
> multipart uploads).
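
(For manual cleanup, the relevant knobs in hadoop-aws should be 
{{fs.s3a.multipart.purge}} and {{fs.s3a.multipart.purge.age}}; enabling the former 
once against a bucket ought to have the same effect as running a test.)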



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11535) TableMapping related tests failed due to 'successful' resolving of invalid test hostname

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311657#comment-14311657
 ] 

Hudson commented on HADOOP-11535:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #99 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/99/])


> TableMapping related tests failed due to 'successful' resolving of invalid 
> test hostname
> 
>
> Key: HADOOP-11535
> URL: https://issues.apache.org/jira/browse/HADOOP-11535
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-11535-v1.patch
>
>
> When running mvn test in my environment, it reported the following.
> {noformat}
> Failed tests: 
>   TestTableMapping.testClearingCachedMappings:144 expected: but 
> was:
>   TestTableMapping.testTableCaching:79 expected: but 
> was:
>   TestTableMapping.testResolve:56 expected: but 
> was:
> {noformat}
> It's caused by successful resolution of the 'bad' test hostname 'a.b.c', as 
> follows.
> {noformat}
> [drankye@zkdesk hadoop-common-project]$ ping a.b.c
> PING a.b.c (220.250.64.228) 56(84) bytes of data.
> {noformat}
> I understand it may happen only in my local environment, and I'm documenting 
> this just in case others also hit it. We may use an even worse hostname than 
> 'a.b.c' to avoid such a situation.
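
(One robust option beyond a merely "worse" name: hostnames under the reserved 
{{.invalid}} TLD, e.g. {{a.b.invalid}}, are guaranteed by RFC 2606 never to resolve.)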



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11520) Clean incomplete multi-part uploads in S3A tests

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311638#comment-14311638
 ] 

Hudson commented on HADOOP-11520:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2049 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2049/])


> Clean incomplete multi-part uploads in S3A tests
> 
>
> Key: HADOOP-11520
> URL: https://issues.apache.org/jira/browse/HADOOP-11520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11520.001.patch
>
>
> As proposed in HADOOP-11488. This patch activates the purging functionality 
> of s3a at the start of each test. This cleans up any in-progress multi-part 
> uploads in the test bucket, preventing unknowing users from eternally paying 
> Amazon for the space of the already uploaded parts of previous tests that 
> failed during a multi-part upload. 
> People who have run the s3a tests should run a single test (after this patch 
> is applied, of course) against all their test buckets (or manually abort the 
> multipart uploads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311662#comment-14311662
 ] 

Yi Liu commented on HADOOP-11563:
-

Hi Kai, there's no need for a separate JIRA to change CHANGES.txt; it's not code.
You can change the CHANGES.txt and commit log directly through git.
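
A sketch of that direct workflow (hypothetical commit message and branch name):

{noformat}
git add hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
git commit -m "HADOOP-11541. Add the missed CHANGES-HDFS-EC-7285.txt entry."
git push origin HDFS-7285
{noformat}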

> Add the missed entry for CHANGES.txt
> 
>
> Key: HADOOP-11563
> URL: https://issues.apache.org/jira/browse/HADOOP-11563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11563-v1.patch
>
>
> When committing HADOOP-11541, we forgot to update the 
> hadoop-common/CHANGES-HDFS-EC-7285.txt file. This is to add the missing entry. 
> Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311649#comment-14311649
 ] 

Hudson commented on HADOOP-11485:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #99 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/99/])


> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Fix For: 3.0.0
>
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch, HADOOP-11485-03.patch, HADOOP-11485-04.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311635#comment-14311635
 ] 

Hudson commented on HADOOP-11485:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2049 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2049/])


> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Fix For: 3.0.0
>
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch, HADOOP-11485-03.patch, HADOOP-11485-04.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11535) TableMapping related tests failed due to 'successful' resolving of invalid test hostname

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311643#comment-14311643
 ] 

Hudson commented on HADOOP-11535:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2049 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2049/])


> TableMapping related tests failed due to 'successful' resolving of invalid 
> test hostname
> 
>
> Key: HADOOP-11535
> URL: https://issues.apache.org/jira/browse/HADOOP-11535
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-11535-v1.patch
>
>
> When running mvn test in my environment, it reported the following.
> {noformat}
> Failed tests: 
>   TestTableMapping.testClearingCachedMappings:144 expected: but 
> was:
>   TestTableMapping.testTableCaching:79 expected: but 
> was:
>   TestTableMapping.testResolve:56 expected: but 
> was:
> {noformat}
> It's caused by successful resolution of the 'bad' test hostname 'a.b.c', as 
> follows.
> {noformat}
> [drankye@zkdesk hadoop-common-project]$ ping a.b.c
> PING a.b.c (220.250.64.228) 56(84) bytes of data.
> {noformat}
> I understand it may happen only in my local environment, and I'm documenting 
> this just in case others also hit it. We may use an even worse hostname than 
> 'a.b.c' to avoid such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11191) NativeAzureFileSystem#close() should be synchronized

2015-02-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-11191.
-
Resolution: Later

> NativeAzureFileSystem#close() should be synchronized
> 
>
> Key: HADOOP-11191
> URL: https://issues.apache.org/jira/browse/HADOOP-11191
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void close() throws IOException {
>   in.close();
>   closed = true;
> }
> {code}
> The other methods, such as seek(), are synchronized.
> close() should be as well.
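
The fix would presumably just mirror the already-synchronized methods; a sketch (not 
a committed patch):

{code}
// Sketch: add the synchronized keyword, matching seek() and the other methods.
public synchronized void close() throws IOException {
  in.close();
  closed = true;
}
{code}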



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11520) Clean incomplete multi-part uploads in S3A tests

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311570#comment-14311570
 ] 

Hudson commented on HADOOP-11520:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #95 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/95/])


> Clean incomplete multi-part uploads in S3A tests
> 
>
> Key: HADOOP-11520
> URL: https://issues.apache.org/jira/browse/HADOOP-11520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11520.001.patch
>
>
> As proposed in HADOOP-11488. This patch activates the purging functionality 
> of s3a at the start of each test. This cleans up any in-progress multi-part 
> uploads in the test bucket, preventing unknowing users from eternally paying 
> Amazon for the space of the already uploaded parts of previous tests that 
> failed during a multi-part upload. 
> People who have run the s3a tests should run a single test (after this patch 
> is applied, of course) against all their test buckets (or manually abort the 
> multipart uploads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311568#comment-14311568
 ] 

Hudson commented on HADOOP-11485:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #95 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/95/])


> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Fix For: 3.0.0
>
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch, HADOOP-11485-03.patch, HADOOP-11485-04.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11535) TableMapping related tests failed due to 'successful' resolving of invalid test hostname

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311575#comment-14311575
 ] 

Hudson commented on HADOOP-11535:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #95 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/95/])


> TableMapping related tests failed due to 'successful' resolving of invalid 
> test hostname
> 
>
> Key: HADOOP-11535
> URL: https://issues.apache.org/jira/browse/HADOOP-11535
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-11535-v1.patch
>
>
> When running mvn test in my environment, it reported the following.
> {noformat}
> Failed tests: 
>   TestTableMapping.testClearingCachedMappings:144 expected: but 
> was:
>   TestTableMapping.testTableCaching:79 expected: but 
> was:
>   TestTableMapping.testResolve:56 expected: but 
> was:
> {noformat}
> It's caused by successful resolution of the 'bad' test hostname 'a.b.c', as 
> follows.
> {noformat}
> [drankye@zkdesk hadoop-common-project]$ ping a.b.c
> PING a.b.c (220.250.64.228) 56(84) bytes of data.
> {noformat}
> I understand it may happen only in my local environment, and I'm documenting 
> this just in case others also hit it. We may use an even worse hostname than 
> 'a.b.c' to avoid such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311536#comment-14311536
 ] 

Hudson commented on HADOOP-11485:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2030 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2030/])


> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Fix For: 3.0.0
>
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch, HADOOP-11485-03.patch, HADOOP-11485-04.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11535) TableMapping related tests failed due to 'successful' resolving of invalid test hostname

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311543#comment-14311543
 ] 

Hudson commented on HADOOP-11535:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2030 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2030/])


> TableMapping related tests failed due to 'successful' resolving of invalid 
> test hostname
> 
>
> Key: HADOOP-11535
> URL: https://issues.apache.org/jira/browse/HADOOP-11535
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-11535-v1.patch
>
>
> When running mvn test in my environment, it reported the following.
> {noformat}
> Failed tests: 
>   TestTableMapping.testClearingCachedMappings:144 expected: but 
> was:
>   TestTableMapping.testTableCaching:79 expected: but 
> was:
>   TestTableMapping.testResolve:56 expected: but 
> was:
> {noformat}
> It's caused by successful resolution of the 'bad' test hostname 'a.b.c', as 
> follows.
> {noformat}
> [drankye@zkdesk hadoop-common-project]$ ping a.b.c
> PING a.b.c (220.250.64.228) 56(84) bytes of data.
> {noformat}
> I understand it may happen only in my local environment, and I'm documenting 
> this just in case others also hit it. We may use an even worse hostname than 
> 'a.b.c' to avoid such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11520) Clean incomplete multi-part uploads in S3A tests

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311538#comment-14311538
 ] 

Hudson commented on HADOOP-11520:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2030 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2030/])


> Clean incomplete multi-part uploads in S3A tests
> 
>
> Key: HADOOP-11520
> URL: https://issues.apache.org/jira/browse/HADOOP-11520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11520.001.patch
>
>
> As proposed in HADOOP-11488. This patch activates the purging functionality 
> of s3a at the start of each test. This cleans up any in-progress multi-part 
> uploads in the test bucket, preventing unknowing users from eternally paying 
> Amazon for the space of the already uploaded parts of previous tests that 
> failed during a multi-part upload. 
> People who have run the s3a tests should run a single test (after this patch 
> is applied, of course) against all their test buckets (or manually abort the 
> multipart uploads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8934) Shell command ls should include sort options

2015-02-08 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8934:
---
Status: Patch Available  (was: Open)

> Shell command ls should include sort options
> 
>
> Key: HADOOP-8934
> URL: https://issues.apache.org/jira/browse/HADOOP-8934
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch
>
>
> The shell command ls should include options to sort the output similar to the 
> unix ls command.  The following options seem appropriate:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : use access time rather than modification time for sort and display
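
If the options land as listed, usage would presumably look like:

{noformat}
hadoop fs -ls -t /user/alice      # newest modification time first
hadoop fs -ls -S /user/alice      # largest files first
hadoop fs -ls -t -r /user/alice   # oldest modification time first
hadoop fs -ls -u -t /user/alice   # sort and display by access time
{noformat}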



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8934) Shell command ls should include sort options

2015-02-08 Thread Jonathan Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Allen updated HADOOP-8934:
---
Attachment: HADOOP-8934.patch

Updated for latest trunk.

> Shell command ls should include sort options
> 
>
> Key: HADOOP-8934
> URL: https://issues.apache.org/jira/browse/HADOOP-8934
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Minor
> Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, 
> HADOOP-8934.patch
>
>
> The shell command ls should include options to sort the output similar to the 
> unix ls command.  The following options seem appropriate:
> -t : sort by modification time
> -S : sort by file size
> -r : reverse the sort order
> -u : use access time rather than modification time for sort and display



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9600) In Windows: Hadoop fails to run when JAVA_HOME has spaces in it

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9600:
-
Status: Open  (was: Patch Available)

Cancelling patch since it no longer applies.

> In Windows: Hadoop fails to run when JAVA_HOME has spaces in it
> ---
>
> Key: HADOOP-9600
> URL: https://issues.apache.org/jira/browse/HADOOP-9600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
> Environment: Windows
>Reporter: Mostafa Elhemali
>Assignee: Mostafa Elhemali
> Attachments: HADOOP-9600.2.patch, HADOOP-9600.3.patch, 
> HADOOP-9600.4.patch, HADOOP-9600.5.patch, HADOOP-9600.6.patch, 
> HADOOP-9600.patch
>
>
> hadoop-config.cmd misbehaves when JAVA_HOME has spaces in it (e.g. if Java is 
> in c:\Program Files).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9600) In Windows: Hadoop fails to run when JAVA_HOME has spaces in it

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9600:
-
Component/s: scripts

> In Windows: Hadoop fails to run when JAVA_HOME has spaces in it
> ---
>
> Key: HADOOP-9600
> URL: https://issues.apache.org/jira/browse/HADOOP-9600
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
> Environment: Windows
>Reporter: Mostafa Elhemali
>Assignee: Mostafa Elhemali
> Attachments: HADOOP-9600.2.patch, HADOOP-9600.3.patch, 
> HADOOP-9600.4.patch, HADOOP-9600.5.patch, HADOOP-9600.6.patch, 
> HADOOP-9600.patch
>
>
> hadoop-config.cmd misbehaves when JAVA_HOME has spaces in it (e.g. if Java is 
> in c:\Program Files).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7893) Sort out tarball conf directories

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7893.
--
Resolution: Won't Fix

Closing as Won't Fix since none of it exists in trunk anymore.

> Sort out tarball conf directories
> -
>
> Key: HADOOP-7893
> URL: https://issues.apache.org/jira/browse/HADOOP-7893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>Assignee: Eric Yang
>
> The conf directory situation in the tarball (generated by mvn package -Dtar) 
> is a mess. The top-level conf directory just contains the MR2 conf, and there 
> are two other incomplete conf dirs:
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ ls conf/
> slaves  yarn-env.sh  yarn-site.xml
> hadoop-0.24.0-SNAPSHOT $ find . -name conf
> ./conf
> ./share/hadoop/hdfs/templates/conf
> ./share/hadoop/common/templates/conf
> {noformat}
> yet there are 4 hdfs-site.xml files:
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ find . -name hdfs-site.xml
> ./etc/hadoop/hdfs-site.xml
> ./share/hadoop/hdfs/templates/conf/hdfs-site.xml
> ./share/hadoop/hdfs/templates/hdfs-site.xml
> ./share/hadoop/common/templates/conf/hdfs-site.xml
> {noformat}
> And it looks like ./share/hadoop/common/templates/conf contains the old MR1 
> style conf (e.g. mapred-site.xml).
> We should generate a tarball with a single conf directory that just has 
> common, hdfs and mr2 confs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11554:
--
Status: Open  (was: Patch Available)

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11554) Expose HadoopKerberosName as a hadoop subcommand

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11554:
--
Status: Patch Available  (was: Open)

> Expose HadoopKerberosName as a hadoop subcommand
> 
>
> Key: HADOOP-11554
> URL: https://issues.apache.org/jira/browse/HADOOP-11554
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11554-01.patch, HADOOP-11554.patch
>
>
> HadoopKerberosName has been around as a "secret hack" for quite a while.  We 
> should clean up the output and make it official by exposing it via the hadoop 
> command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10935) Cleanup HadoopKerberosName for public consumption

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-10935.
---
Resolution: Duplicate

> Cleanup HadoopKerberosName for public consumption
> -
>
> Key: HADOOP-10935
> URL: https://issues.apache.org/jira/browse/HADOOP-10935
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Allen Wittenauer
>Priority: Minor
>  Labels: newbie
>
> It would be good if we pulled HadoopKerberosName out of the closet and into 
> the light so that others may bask in its glorious usefulness.
> Missing:
> * Documentation
> * Shell shortcut
> * CLI help when run without arguments



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-3438) NPE if job tracker started and system property hadoop.log.dir is not set

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-3438.
--
Resolution: Won't Fix

> NPE if job tracker started and system property hadoop.log.dir is not set
> 
>
> Key: HADOOP-3438
> URL: https://issues.apache.org/jira/browse/HADOOP-3438
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.18.0
> Environment: amd64 ubuntu, jrockit 1.6
>Reporter: Steve Loughran
>  Labels: newbie
>
> This is a regression. If the system property "hadoop.log.dir" is not set, the 
> job tracker NPEs rather than starts up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7927) Can't build packages 205+ on OSX

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7927.
--
Resolution: Won't Fix

The issues here have long since been fixed in newer versions of Hadoop.

> Can't build packages 205+ on OSX
> 
>
> Key: HADOOP-7927
> URL: https://issues.apache.org/jira/browse/HADOOP-7927
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.205.0, 1.0.0
>Reporter: Jakob Homan
>
> Currently the ant build script tries to reference the native directories, 
> which are not built on OSX, breaking the build:
> {noformat}bin-package:
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/bin
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/etc/hadoop
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/lib
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/libexec
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/sbin
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/contrib
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/webapps
> [mkdir] Created dir: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/templates/conf
>  [copy] Copying 11 files to 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/templates/conf
>  [copy] Copying 39 files to 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/lib
>  [copy] Copying 15 files to 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/share/hadoop/lib
> BUILD FAILED
> /Users/jhoman/repos/hadoop-common/build.xml:1611: 
> /Users/jhoman/repos/hadoop-common/build/hadoop-0.20.205.1/native does not 
> exist.
> {noformat}
> Once one fixes this, one discovers the build is also trying to build the 
> Linux task controller, regardless of whether or not the native flag is set, 
> which also fails.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8448) Java options being duplicated several times

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8448.
--
Resolution: Not a Problem

Fixed in trunk.

> Java options being duplicated several times
> ---
>
> Key: HADOOP-8448
> URL: https://issues.apache.org/jira/browse/HADOOP-8448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, scripts
>Affects Versions: 1.0.2
> Environment: VirtualBox 4.1.14 r77440
> Linux slack 2.6.37.6 #3 SMP Sat Apr 9 22:49:32 CDT 2011 x86_64 Intel(R) 
> Core(TM)2 Quad CPUQ8300  @ 2.50GHz GenuineIntel GNU/Linux 
> java version "1.7.0_04"
> Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
> Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)
> Hadoop 1.0.2
> Subversion 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 
> 1304954 Compiled by hortonfo on Sat Mar 24 23:58:21 UTC 2012
> From source with checksum c198b04303cfa626a38e13154d2765a9
> Hadoop is running under Pseudo-Distributed mode according to the 
> http://hadoop.apache.org/common/docs/r1.0.3/single_node_setup.html#PseudoDistributed
>Reporter: Evgeny Rusak
>
> After adding an additional Java option to HADOOP_JOBTRACKER_OPTS like 
> the following
>  export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Dxxx=yyy"
> and starting the Hadoop instance with start-all.sh, the added option is 
> attached several times, as shown by the command
>  ps ax | grep jobtracker 
> which prints 
> .
> 29824 ?Sl22:29 home/hduser/apps/jdk/jdk1.7.0_04/bin/java  
>-Dproc_jobtracker -XX:MaxPermSize=256m 
> -Xmx600m -Dxxx=yyy -Dxxx=yyy
> -Dxxx=yyy -Dxxx=yyy -Dxxx=yyy 
> -Dhadoop.log.dir=/home/hduser/apps/hadoop/hadoop-1.0.2/libexec/../logs
> ..
>  The aforementioned unexpected behaviour causes a severe issue when 
> specifying the "-agentpath:" option, because the duplicated agents are 
> considered different agents and are instantiated several times at once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8448) Java options being duplicated several times

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8448:
-
Component/s: scripts

> Java options being duplicated several times
> ---
>
> Key: HADOOP-8448
> URL: https://issues.apache.org/jira/browse/HADOOP-8448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, scripts
>Affects Versions: 1.0.2
> Environment: VirtualBox 4.1.14 r77440
> Linux slack 2.6.37.6 #3 SMP Sat Apr 9 22:49:32 CDT 2011 x86_64 Intel(R) 
> Core(TM)2 Quad CPUQ8300  @ 2.50GHz GenuineIntel GNU/Linux 
> java version "1.7.0_04"
> Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
> Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)
> Hadoop 1.0.2
> Subversion 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 
> 1304954 Compiled by hortonfo on Sat Mar 24 23:58:21 UTC 2012
> From source with checksum c198b04303cfa626a38e13154d2765a9
> Hadoop is running under Pseudo-Distributed mode according to the 
> http://hadoop.apache.org/common/docs/r1.0.3/single_node_setup.html#PseudoDistributed
>Reporter: Evgeny Rusak
>
> After adding an additional Java option to HADOOP_JOBTRACKER_OPTS like 
> the following
>  export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Dxxx=yyy"
> and starting the Hadoop instance with start-all.sh, the added option is 
> attached several times, as shown by the command
>  ps ax | grep jobtracker 
> which prints 
> .
> 29824 ?Sl22:29 home/hduser/apps/jdk/jdk1.7.0_04/bin/java  
>-Dproc_jobtracker -XX:MaxPermSize=256m 
> -Xmx600m -Dxxx=yyy -Dxxx=yyy
> -Dxxx=yyy -Dxxx=yyy -Dxxx=yyy 
> -Dhadoop.log.dir=/home/hduser/apps/hadoop/hadoop-1.0.2/libexec/../logs
> ..
>  The aforementioned unexpected behaviour causes a severe issue when 
> specifying the "-agentpath:" option, because the duplicated agents are 
> considered different agents and are instantiated several times at once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7895) HADOOP_LOG_DIR has to be set explicitly when running from the tarball

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7895.
--
Resolution: Not a Problem

Fixed in trunk.

> HADOOP_LOG_DIR has to be set explicitly when running from the tarball
> -
>
> Key: HADOOP-7895
> URL: https://issues.apache.org/jira/browse/HADOOP-7895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> When running bin and sbin commands from the tarball, if HADOOP_LOG_DIR is not 
> explicitly set in hadoop-env.sh, it doesn't use HADOOP_HOME/logs by default 
> like it used to; instead it picks a wrong dir:
> {noformat}
> localhost: mkdir: cannot create directory `/eli': Permission denied
> localhost: chown: cannot access `/eli/eli': No such file or directory
> {noformat}
> We should have it default to HADOOP_HOME/logs, or at least fail with a message 
> if the dir doesn't exist or the env var isn't set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8033) HADOOP_JAVA_PLATFORM_OPS is no longer respected

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8033.
--
Resolution: Won't Fix

> HADOOP_JAVA_PLATFORM_OPS is no longer respected
> ---
>
> Key: HADOOP-8033
> URL: https://issues.apache.org/jira/browse/HADOOP-8033
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 0.22.0, 0.23.0
>Reporter: Eli Collins
>Priority: Minor
>
> HADOOP-6284 introduced HADOOP_JAVA_PLATFORM_OPS and it's in branch-1; however, 
> it's not in trunk, 0.22, or 0.23. It's referenced in hadoop-env.sh (commented out) 
> but not actually used anywhere; I'm not sure when it was removed from bin/hadoop. 
> Perhaps the intention was to just use HADOOP_OPTS?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11185) There should be a way to disable a kill -9 during stop

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11185:
--
Component/s: scripts

> There should be a way to disable a kill -9 during stop
> --
>
> Key: HADOOP-11185
> URL: https://issues.apache.org/jira/browse/HADOOP-11185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Ravi Prakash
>
> E.g. hadoop-common-project/hadoop-common/bin/src/main/bin/hadoop-functions.sh 
> calls kill -9 after some time. This might not be the best thing to do for 
> some processes (if HA is not enabled). There should be an ability to disable 
> this kill -9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10797) Hardcoded path to "bash" is not portable

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10797:
--
Component/s: scripts

> Hardcoded path to "bash" is not portable
> 
>
> Key: HADOOP-10797
> URL: https://issues.apache.org/jira/browse/HADOOP-10797
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.4.1
>Reporter: Dmitry Sivachenko
> Attachments: bash.patch
>
>
> Most shell scripts use a shebang line in the following format:
> #!/usr/bin/env bash
> But some scripts contain a hardcoded "/bin/bash", which is not portable.
> Please use #!/usr/bin/env bash instead for portability.
> PS: it would be much better to switch to the standard Bourne shell /bin/sh; do 
> these scripts really need bash?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7894) bin and sbin commands don't use JAVA_HOME when run from the tarball

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7894.
--
Resolution: Not a Problem

Fixed in trunk.

> bin and sbin commands don't use  JAVA_HOME when run from the tarball 
> -
>
> Key: HADOOP-7894
> URL: https://issues.apache.org/jira/browse/HADOOP-7894
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Eli Collins
>
> When running e.g. ./sbin/start-dfs.sh from a tarball, the scripts complain that 
> JAVA_HOME is not set and could not be found, even if the env var is set.
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ echo $JAVA_HOME
> /home/eli/toolchain/jdk1.6.0_24-x64
> hadoop-0.24.0-SNAPSHOT $ ./sbin/start-dfs.sh 
> log4j:ERROR Could not find value for key log4j.appender.NullAppender
> log4j:ERROR Could not instantiate appender named "NullAppender".
> Starting namenodes on [localhost]
> localhost: Error: JAVA_HOME is not set and could not be found.
> {noformat}
> I have to explicitly set this via hadoop-env.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8072) bin/hadoop leaks pids when running a non-detached datanode via jsvc

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8072:
-
Component/s: scripts

> bin/hadoop leaks pids when running a non-detached datanode via jsvc
> ---
>
> Key: HADOOP-8072
> URL: https://issues.apache.org/jira/browse/HADOOP-8072
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.205.0
> Environment: Centos 5 or 6, but affects all platforms
>Reporter: Peter Linnell
> Attachments: fix-leaking-pid-s.patch
>
>
> See: https://issues.cloudera.org/browse/DISTRO-53  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8072) bin/hadoop leaks pids when running a non-detached datanode via jsvc

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8072.
--
Resolution: Not a Problem

Long since fixed. Closing.

> bin/hadoop leaks pids when running a non-detached datanode via jsvc
> ---
>
> Key: HADOOP-8072
> URL: https://issues.apache.org/jira/browse/HADOOP-8072
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.205.0
> Environment: Centos 5 or 6, but affects all platforms
>Reporter: Peter Linnell
> Attachments: fix-leaking-pid-s.patch
>
>
> See: https://issues.cloudera.org/browse/DISTRO-53  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8650) /bin/hadoop-daemon.sh to add "-f " arg for forced shutdowns

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8650.
--
Resolution: Won't Fix

Fixed in trunk, 1.x is dead. 2.x might as well be.

> /bin/hadoop-daemon.sh to add "-f " arg for forced shutdowns 
> -
>
> Key: HADOOP-8650
> URL: https://issues.apache.org/jira/browse/HADOOP-8650
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 1.0.3, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
>
> Add a timeout for the daemon script to trigger a kill -9 if the clean 
> shutdown fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8650) /bin/hadoop-daemon.sh to add "-f " arg for forced shutdowns

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8650:
-
Component/s: scripts

> /bin/hadoop-daemon.sh to add "-f " arg for forced shutdowns 
> -
>
> Key: HADOOP-8650
> URL: https://issues.apache.org/jira/browse/HADOOP-8650
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 1.0.3, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
>
> Add a timeout for the daemon script to trigger a kill -9 if the clean 
> shutdown fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8103) Hadoop-bin commands for windows

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8103:
-
Resolution: Not a Problem
Status: Resolved  (was: Patch Available)

This issue was fixed at some point.

> Hadoop-bin commands for windows
> ---
>
> Key: HADOOP-8103
> URL: https://issues.apache.org/jira/browse/HADOOP-8103
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 1.1.0, 0.24.0
>Reporter: Sanjay Radia
> Attachments: windows-cmd-scripts.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9752) Latest Ubuntu (13.04) /bin/kill parameter for process group requires a 'double dash kill -0 -- -

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9752:
-
Component/s: (was: scripts)

> Latest Ubuntu (13.04)  /bin/kill parameter for process group requires a 
> 'double dash kill -0 -- -
> --
>
> Key: HADOOP-9752
> URL: https://issues.apache.org/jira/browse/HADOOP-9752
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
> Attachments: HADOOP-9752v1.patch, HADOOP-9752v2.patch, 
> HADOOP-9752v3.patch, HADOOP-9752v4.patch, HADOOP-9752v4.patch
>
>
> This changed in Ubuntu 12.10 and later. It prevents the kill command from 
> executing correctly in Shell.java.
> There is a bug filed in Ubuntu but there is not much activity. 
> https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1077796
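
As an aside, the shape of the fix is easy to sketch. A minimal, hypothetical helper (illustrative only, not the attached Shell.java patches) that builds the liveness check so a negative process-group id is not parsed as an option:

{code}
// Hypothetical helper modeled on what Shell.java needs: a process-group
// target is a negative pid (e.g. "-2338"), and newer coreutils kill
// refuses it unless "--" marks the end of options.
public class KillCommandSketch {
  public static String[] checkIsAliveCommand(String pid) {
    // "kill -0" sends no signal; it only checks that the target exists.
    return pid.startsWith("-")
        ? new String[] { "kill", "-0", "--", pid }  // process group
        : new String[] { "kill", "-0", pid };       // single process
  }

  public static void main(String[] args) {
    System.out.println(String.join(" ", checkIsAliveCommand("-2338")));
    // prints: kill -0 -- -2338
  }
}
{code}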



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9752) Latest Ubuntu (13.04) /bin/kill parameter for process group requires a 'double dash kill -0 -- -

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9752:
-
Component/s: scripts

> Latest Ubuntu (13.04)  /bin/kill parameter for process group requires a 
> 'double dash kill -0 -- -
> --
>
> Key: HADOOP-9752
> URL: https://issues.apache.org/jira/browse/HADOOP-9752
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts, util
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
> Attachments: HADOOP-9752v1.patch, HADOOP-9752v2.patch, 
> HADOOP-9752v3.patch, HADOOP-9752v4.patch, HADOOP-9752v4.patch
>
>
> This changed in Ubuntu 12.10 and later. It prevents the kill command from 
> executing correctly in Shell.java.
> There is a bug filed in Ubuntu but there is not much activity. 
> https://bugs.launchpad.net/ubuntu/+source/coreutils/+bug/1077796



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11293) Factor OSType out from Shell

2015-02-08 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311453#comment-14311453
 ] 

Yongjun Zhang commented on HADOOP-11293:


Hi [~stev...@iseran.com],

You changed the status from "Patch Available" -> "Open" -> "Patch Available" twice 
recently, and it seems none of those changes triggered a Jenkins test run. We may 
need to bring this to the infra folks' attention.

I uploaded the same patch twice and both uploads triggered test runs.

For the two failed tests in yesterday's run:
the TestJobConf failure was reported as MAPREDUCE-6223, and
running TestHDFSCLI locally is successful.

Thanks.


> Factor OSType out from Shell
> 
>
> Key: HADOOP-11293
> URL: https://issues.apache.org/jira/browse/HADOOP-11293
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, util
>Affects Versions: 2.7.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11293.001.patch, HADOOP-11293.002.patch, 
> HADOOP-11293.003.patch, HADOOP-11293.004.patch, HADOOP-11293.005.patch, 
> HADOOP-11293.005.patch, HADOOP-11293.005.patch, HADOOP-11293.005.patch
>
>
> Currently the code that detects the OS type is located in Shell.java. Code 
> that needs to check the OS type refers to Shell, even if nothing else from 
> Shell is needed. 
> I am proposing to refactor OSType out into its own class, to make OSType 
> easier to access and the dependency cleaner.
>  
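
To make the proposal concrete, here is a sketch of what a standalone OSType could look like (names are illustrative, not taken from the attached patches), detecting the platform once from the os.name system property:

{code}
// Illustrative sketch of a standalone OSType, independent of Shell.
public enum OSType {
  LINUX, WINDOWS, SOLARIS, MAC, FREEBSD, OTHER;

  private static final OSType CURRENT = detect();

  private static OSType detect() {
    String os = System.getProperty("os.name", "").toLowerCase();
    if (os.contains("windows")) return WINDOWS;
    if (os.contains("linux"))   return LINUX;
    if (os.contains("sunos"))   return SOLARIS;
    if (os.contains("mac"))     return MAC;
    if (os.contains("freebsd")) return FREEBSD;
    return OTHER;
  }

  // Callers can check the OS without dragging in all of Shell.
  public static OSType current() { return CURRENT; }
}
{code}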



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7543) hadoop-config.sh is missing in HADOOP_COMMON_HOME/bin after mvn'ization

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7543.
--
Resolution: Not a Problem

> hadoop-config.sh is missing in HADOOP_COMMON_HOME/bin after mvn'ization
> ---
>
> Key: HADOOP-7543
> URL: https://issues.apache.org/jira/browse/HADOOP-7543
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Arun C Murthy
>
> hadoop-config.sh is missing in $HADOOP_COMMON_HOME/bin after mvn'ization, 
> it's only in $HADOOP_COMMON_HOME/libexec which breaks bin/hdfs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8494) bin/hadoop dfs -help tries to connect to NameNode instead of just printing help

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8494.
--
Resolution: Duplicate

> bin/hadoop dfs -help tries to connect to NameNode instead of just printing 
> help
> ---
>
> Key: HADOOP-8494
> URL: https://issues.apache.org/jira/browse/HADOOP-8494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.0.3
> Environment: ubuntu 12.04   hadoop-1.0.3
>Reporter: robin
>
> {code}
> szx@ubuntu1:/opt/hadoop$ bin/hadoop dfs -help
> 12/06/07 23:18:51 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 0 time(s).
> 12/06/07 23:18:52 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 1 time(s).
> 12/06/07 23:18:53 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 2 time(s).
> 12/06/07 23:18:54 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 3 time(s).
> 12/06/07 23:18:55 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 4 time(s).
> 12/06/07 23:18:56 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 5 time(s).
> 12/06/07 23:18:57 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 6 time(s).
> 12/06/07 23:18:58 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 7 time(s).
> 12/06/07 23:18:59 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 8 time(s).
> 12/06/07 23:19:00 INFO ipc.Client: Retrying connect to server: 
> ubuntu1/192.168.200.135:9000. Already tried 9 time(s).
> Bad connection to FS. command aborted. exception: Call to 
> ubuntu1/192.168.200.135:9000 failed on connection exception: 
> java.net.ConnectException: Connection refused
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11564) Fix Multithreaded correctness Warnings in BackupImage.java

2015-02-08 Thread Rakesh R (JIRA)
Rakesh R created HADOOP-11564:
-

 Summary: Fix Multithreaded correctness Warnings in BackupImage.java
 Key: HADOOP-11564
 URL: https://issues.apache.org/jira/browse/HADOOP-11564
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R


Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem; locked 60% of 
time
{code}
Bug type IS2_INCONSISTENT_SYNC (click for details) 
In class org.apache.hadoop.hdfs.server.namenode.BackupImage
Field org.apache.hadoop.hdfs.server.namenode.BackupImage.namesystem
Synchronized 60% of the time
Unsynchronized access at BackupImage.java:[line 97]
Unsynchronized access at BackupImage.java:[line 261]
Synchronized access at BackupImage.java:[line 197]
Synchronized access at BackupImage.java:[line 212]
Synchronized access at BackupImage.java:[line 295]
{code}

https://builds.apache.org/job/PreCommit-HDFS-Build/9493//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html#Details
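
The usual fix for an IS2_INCONSISTENT_SYNC warning is to route every read and write of the field through the same lock. A generic sketch of the pattern (BackupImage and FSNamesystem details elided):

{code}
// Generic pattern: guard all accesses to the flagged field with the same
// monitor, so FindBugs sees 100% (not 60%) synchronized access.
public class BackupImageLike {
  private Object namesystem; // the field FindBugs flagged

  public synchronized void setNamesystem(Object ns) {
    this.namesystem = ns;
  }

  public synchronized Object getNamesystem() {
    return namesystem;
  }

  public void doWork() {
    // The previously unsynchronized call sites read the field directly;
    // routing them through the synchronized getter removes the warning.
    Object ns = getNamesystem();
    System.out.println("using " + ns);
  }
}
{code}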



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8792) hadoop-daemon doesn't handle chown failures

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8792.
--
Resolution: Not a Problem

This was fixed by HADOOP-9902. At this time, the hadoop shell code no longer 
executes a chown or does much with usernames other than using them for log file 
names. Closing as Not a Problem.

> hadoop-daemon doesn't handle chown failures
> ---
>
> Key: HADOOP-8792
> URL: https://issues.apache.org/jira/browse/HADOOP-8792
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
> Environment: Whirr deployment onto existing VM
>Reporter: Steve Loughran
>
> A whirr deployment of the JT failed; it looks like the hadoop user wasn't 
> there. This didn't get picked up by whirr (WHIRR-651) because the hadoop-daemon 
> script doesn't check the return value of its chown operation; this should be 
> converted into a failure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10245) Hadoop command line always appends "-Xmx" option twice

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10245:
--
Resolution: Not a Problem
Status: Resolved  (was: Patch Available)

This has been fixed as part of HADOOP-9902. Closing as 'Not a problem'.

> Hadoop command line always appends "-Xmx" option twice
> --
>
> Key: HADOOP-10245
> URL: https://issues.apache.org/jira/browse/HADOOP-10245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin, scripts
>Affects Versions: 2.2.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-10245.patch
>
>
> The Hadoop command line scripts (hadoop.sh or hadoop.cmd) will call java with 
> the "-Xmx" option twice. The impact is that any user-defined HADOOP_HEAP_SIZE 
> env variable takes no effect because it is overridden by the second 
> "-Xmx" option.
> For example, here is the java command generated for "hadoop fs -ls /". 
> Notice that there are two "-Xmx" options, "-Xmx1000m" and "-Xmx512m", in the 
> command line:
> java -Xmx1000m -Dhadoop.log.dir=C:\tmp\logs -Dhadoop.log.file=hadoop.log 
> -Dhadoop.root.logger=INFO,console,DRFA -Xmx512m 
> -Dhadoop.security.logger=INFO,RFAS -classpath XXX 
> org.apache.hadoop.fs.FsShell -ls /
> Here is the root cause:
> The call flow is: hadoop.sh calls hadoop-config.sh, which in turn calls 
> hadoop-env.sh. 
> In hadoop.sh, the command line is generated by the following pseudo code:
> java $JAVA_HEAP_MAX $HADOOP_CLIENT_OPTS -classpath ...
> In hadoop-config.sh, $JAVA_HEAP_MAX is initialized as "-Xmx1000m" if the user 
> didn't set the $HADOOP_HEAP_SIZE env variable.
> In hadoop-env.sh, $HADOOP_CLIENT_OPTS is set as follows:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> To fix this problem, we should remove the "-Xmx512m" from HADOOP_CLIENT_OPTS. 
> If we really want to change the memory settings, we should use the 
> $HADOOP_HEAP_SIZE env variable.
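
On typical JVMs the last -Xmx on the command line wins, which is why the user's setting is silently overridden. A quick, illustrative way to verify which value actually took effect:

{code}
public class HeapCheck {
  public static void main(String[] args) {
    // With "-Xmx1000m ... -Xmx512m" on the command line this reports
    // roughly 512 MB, confirming the second option overrode the first.
    long mb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
    System.out.println("effective max heap ~= " + mb + " MB");
  }
}
{code}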



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10245) Hadoop command line always appends "-Xmx" option twice

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10245:
--
Component/s: scripts

> Hadoop command line always appends "-Xmx" option twice
> --
>
> Key: HADOOP-10245
> URL: https://issues.apache.org/jira/browse/HADOOP-10245
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin, scripts
>Affects Versions: 2.2.0
>Reporter: shanyu zhao
>Assignee: shanyu zhao
> Attachments: HADOOP-10245.patch
>
>
> The Hadoop command line scripts (hadoop.sh or hadoop.cmd) will call java with 
> the "-Xmx" option twice. The impact is that any user-defined HADOOP_HEAP_SIZE 
> env variable takes no effect because it is overridden by the second 
> "-Xmx" option.
> For example, here is the java command generated for "hadoop fs -ls /". 
> Notice that there are two "-Xmx" options, "-Xmx1000m" and "-Xmx512m", in the 
> command line:
> java -Xmx1000m -Dhadoop.log.dir=C:\tmp\logs -Dhadoop.log.file=hadoop.log 
> -Dhadoop.root.logger=INFO,console,DRFA -Xmx512m 
> -Dhadoop.security.logger=INFO,RFAS -classpath XXX 
> org.apache.hadoop.fs.FsShell -ls /
> Here is the root cause:
> The call flow is: hadoop.sh calls hadoop-config.sh, which in turn calls 
> hadoop-env.sh. 
> In hadoop.sh, the command line is generated by the following pseudo code:
> java $JAVA_HEAP_MAX $HADOOP_CLIENT_OPTS -classpath ...
> In hadoop-config.sh, $JAVA_HEAP_MAX is initialized as "-Xmx1000m" if the user 
> didn't set the $HADOOP_HEAP_SIZE env variable.
> In hadoop-env.sh, $HADOOP_CLIENT_OPTS is set as follows:
> export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> To fix this problem, we should remove the "-Xmx512m" from HADOOP_CLIENT_OPTS. 
> If we really want to change the memory settings, we should use the 
> $HADOOP_HEAP_SIZE env variable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8938) add option to do better diags of startup configuration

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8938.
--
Resolution: Not a Problem

Lots of shell changes, and in particular HADOOP-11013, have been implemented to 
help here.  HADOOP-7947 is also in progress at present, which will help on the 
XML bits.

> add option to do better diags of startup configuration
> --
>
> Key: HADOOP-8938
> URL: https://issues.apache.org/jira/browse/HADOOP-8938
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: bin
>Affects Versions: 1.1.0, 2.0.2-alpha
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: hdiag.py
>
>
> HADOOP-8931 shows a symptom of a larger problem: we need better diagnostics 
> of all the environment variables and settings going through the hadoop 
> scripts, to find out why something isn't working. 
> Ideally some command line parameter to the scripts (or even a new 
> environment variable) could trigger more display of critical parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-9089) start-all.sh references a missing file start-mapred.sh

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-9089.
--
Resolution: Cannot Reproduce

This has been fixed at some point. Closing.

> start-all.sh references a missing file start-mapred.sh
> --
>
> Key: HADOOP-9089
> URL: https://issues.apache.org/jira/browse/HADOOP-9089
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 0.23.4
>Reporter: Yevgen Yampolskiy
>Priority: Minor
>
> start-mapred.sh is not included in the 0.23.4 release. 
> I do not know if this is an intended change; however, start-all.sh generates 
> the message:
> This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
> Either the message in start-all.sh needs to be changed, or start-all.sh should 
> be removed, or start-mapred.sh should be put back into the distribution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9086) Enforce process singleton rules through an exclusive write lock on a file, not a pid file +kill -0,

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9086:
-
Component/s: scripts

> Enforce process singleton rules through an exclusive write lock on a file, 
> not a pid file +kill -0,
> ---
>
> Key: HADOOP-9086
> URL: https://issues.apache.org/jira/browse/HADOOP-9086
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, util
>Affects Versions: 1.1.1, 2.0.3-alpha
> Environment: Unix/Linux. 
>Reporter: Steve Loughran
>
> the {{hadoop-daemon.sh}} script (and other liveness monitors) probe the 
> existence of a daemon service by a {{kill -0}} of a process id picked up from 
> a pid file. 
> This is flawed:
> # pid file locations may change with installations.
> # Linux and Unix recycle pids, leading to false positives: the scripts think 
> the process is running when in fact another process is.
> # it doesn't work on Windows.
> Having the processes acquire an exclusive write-lock on a known file would 
> delegate lock management, and implicitly liveness, to the OS itself. When the 
> process dies, the lock is released (on Unixes).
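
A minimal sketch of the proposed approach using java.nio: the daemon takes an exclusive lock on a well-known file at startup, and the OS releases it automatically when the process dies, so a liveness probe cannot be fooled by pid recycling (the lock file name here is illustrative):

{code}
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class DaemonLockSketch {
  // Acquire an exclusive lock and hold it for the life of the process.
  // Another process that fails to get the lock knows the daemon is
  // genuinely alive; no pid file or kill -0 involved.
  public static FileLock acquire(File lockFile) throws Exception {
    RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
    FileLock lock = raf.getChannel().tryLock();
    if (lock == null) {
      throw new IllegalStateException("daemon already running: " + lockFile);
    }
    return lock; // released by the OS when the process exits
  }

  public static void main(String[] args) throws Exception {
    acquire(new File("/tmp/namenode.lock"));
    Thread.sleep(Long.MAX_VALUE); // stand in for the daemon's lifetime
  }
}
{code}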



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9085) start namenode failure,bacause pid of namenode pid file is other process pid or thread id before start namenode

2015-02-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9085:
-
Component/s: scripts

> start namenode failure,bacause pid of namenode pid file is other process pid 
> or thread id before start namenode
> ---
>
> Key: HADOOP-9085
> URL: https://issues.apache.org/jira/browse/HADOOP-9085
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin, scripts
>Affects Versions: 2.0.1-alpha, 2.0.3-alpha
> Environment: NA
>Reporter: liaowenrui
> Fix For: 2.0.1-alpha, 2.0.2-alpha, 2.7.0
>
>
> If the pid recorded in the namenode pid file belongs to another process (or is 
> a thread id) before the namenode is started, starting the namenode will fail, 
> because hadoop-daemon.sh checks that pid with the kill -0 command first. When 
> the pid belongs to another process or thread, kill -0 returns success, so the 
> script concludes the namenode is already running when in reality it is not.
> For example, 2338 is the dead namenode's pid and 2305 is the datanode's pid:
> cqn2:/tmp # kill -0 2338
> cqn2:/tmp # ps -wweLo pid,ppid,tid | grep 2338
>  2305 1  2338



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-5093) Configuration default resource handling needs to be able to remove default resources

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-5093.

   Resolution: Won't Fix
Fix Version/s: 2.7.0

The real problem is that invalid resources are picked up without any checks. 
WONTFIX

> Configuration default resource handling needs to be able to remove default 
> resources 
> -
>
> Key: HADOOP-5093
> URL: https://issues.apache.org/jira/browse/HADOOP-5093
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 2.7.0
>
>
> There's a way to add default resources, but not remove them. This allows 
> someone to push an invalid resource into the default list, and for the rest 
> of the JVM's life, any Conf file loaded with quietMode set will fail.
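
For context, the add side of the API looks like this; the issue is that no matching removal call exists, so a bad entry persists. A sketch (the resource and key names are illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class DefaultResourceDemo {
  public static void main(String[] args) {
    // Registers a resource loaded by every Configuration created
    // afterwards. With no removeDefaultResource counterpart, an invalid
    // "my-defaults.xml" poisons every subsequent quiet-mode load.
    Configuration.addDefaultResource("my-defaults.xml");

    Configuration conf = new Configuration(); // picks up my-defaults.xml
    System.out.println(conf.get("some.key", "unset"));
  }
}
{code}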



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-5731) IPC call can raise security exceptions when the remote node is running under a security manager

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-5731.

Resolution: Won't Fix

Nobody else is seeing/complaining about this. WONTFIX

> IPC call can raise security exceptions when the remote node is running under 
> a security manager
> ---
>
> Key: HADOOP-5731
> URL: https://issues.apache.org/jira/browse/HADOOP-5731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> I'm getting a security exception (java.lang.reflect.ReflectPermission 
> suppressAccessChecks) in RPC.Server.call(), when calling a datanode brought 
> up under a security manager, in method.setAccessible(true)
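
One conventional workaround is to perform the setAccessible call inside a doPrivileged block, so the security policy only needs to grant the permission to the IPC layer's own protection domain. A hedged sketch, not the actual RPC.Server code:

{code}
import java.lang.reflect.Method;
import java.security.AccessController;
import java.security.PrivilegedAction;

public class AccessibleCallSketch {
  // Wraps the reflective access so it succeeds when the policy grants
  // ReflectPermission("suppressAccessChecks") to this code but not to
  // its callers.
  static void makeAccessible(final Method method) {
    AccessController.doPrivileged(new PrivilegedAction<Void>() {
      public Void run() {
        method.setAccessible(true);
        return null;
      }
    });
  }
}
{code}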



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-9457) add an SCM-ignored XML filename to keep secrets in (auth-keys.xml?)

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9457.

   Resolution: Fixed
Fix Version/s: 2.6.0

the {{hadoop-aws}} and {{hadoop-openstack}} modules both do this

> add an SCM-ignored XML filename to keep secrets in (auth-keys.xml?)
> ---
>
> Key: HADOOP-9457
> URL: https://issues.apache.org/jira/browse/HADOOP-9457
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 2.6.0
>
>
> to avoid accidentally checking in secrets, I keep auth keys for things like 
> AWS in a file called {{auth-keys.xml}} alongside the 
> {{test/resources/core-site.xml}} file, then XInclude them. I also have a 
> global gitignore set up to ignore files with that name.
> I propose having a standard name for XML files containing such secrets (we 
> could use auth-keys.xml or something else), and setting up 
> {{hadoop-trunk/.gitignore}} and {{svn:ignore}} to ignore them. That way, 
> nobody else will check them in by accident.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10032) Backport hadoop-openstack to branch 1

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10032.
-
Resolution: Won't Fix

> Backport hadoop-openstack to branch 1
> -
>
> Key: HADOOP-10032
> URL: https://issues.apache.org/jira/browse/HADOOP-10032
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Affects Versions: 1.3.0
>Reporter: Steve Loughran
> Attachments: HADOOP-10032-1.patch
>
>
> Backport the hadoop-openstack module from trunk to branch-1.
> This will need a build.xml file to build it, ivy set up to add any extra 
> dependencies and testing. There's one extra {{FileSystem}} method in 2.x that 
> we can drop for branch-1.
> FWIW I've already built and tested hadoop-openstack against branch 1 by 
> editing the .pom file and having that module build against 1. Before the 
> move from {{isDir()}} to {{isDirectory()}} it compiled and ran fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10036) RetryInvocationHandler should recognise that there is no point retrying to auth failures

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10036:

 Priority: Major  (was: Minor)
Affects Version/s: 2.6.0

> RetryInvocationHandler should recognise that there is no point retrying to 
> auth failures
> 
>
> Key: HADOOP-10036
> URL: https://issues.apache.org/jira/browse/HADOOP-10036
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.3.0, 2.6.0
>Reporter: Steve Loughran
>
> The {{RetryInvocationHandler}} tries to retry connections, so as to handle 
> transient failures. 
> However, auth failures aren't treated specially, so it spins even though the 
> operation will not succeed with the current configuration.
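
A sketch of the desired behaviour with illustrative names (not the actual RetryInvocationHandler internals): classify the exception before consulting the retry policy, and fail fast on errors that retrying cannot cure:

{code}
import java.io.IOException;
import java.util.concurrent.Callable;
import javax.security.sasl.SaslException;
import org.apache.hadoop.security.AccessControlException;

public class FailFastRetrySketch {
  // Transient faults are retried with backoff; auth failures propagate
  // immediately, since no number of retries fixes bad credentials.
  static <T> T invoke(Callable<T> op, int maxRetries) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return op.call();
      } catch (SaslException | AccessControlException e) {
        throw e; // non-retriable: authentication/authorization failure
      } catch (IOException e) {
        if (attempt >= maxRetries) throw e;
        Thread.sleep(1000L << Math.min(attempt, 4)); // simple backoff
      }
    }
  }
}
{code}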



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10219) ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient requested shutdowns

2015-02-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10219:

 Priority: Major  (was: Minor)
Affects Version/s: 2.6.0

> ipc.Client.setupIOstreams() needs to check for ClientCache.stopClient 
> requested shutdowns 
> --
>
> Key: HADOOP-10219
> URL: https://issues.apache.org/jira/browse/HADOOP-10219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.0
>Reporter: Steve Loughran
>
> When {{ClientCache.stopClient()}} is called to stop the IPC client, if the 
> client is blocked spinning due to a connectivity problem, it does not exit 
> until the retry policy has timed out, so the stopClient() operation can hang 
> for an extended period of time.
> This can surface in the shutdown hook of FileSystem.cache.closeAll().
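
A generic sketch of the shape of a fix (illustrative names, not the ipc.Client internals): the connection-setup retry loop checks a shutdown flag on every iteration instead of only honouring the retry policy's timeout:

{code}
import java.io.IOException;

public class StoppableConnectorSketch {
  private volatile boolean running = true; // cleared by stopClient()

  public void stop() { running = false; }

  void setupIOstreams() throws IOException {
    int attempt = 0;
    while (running) { // the missing check: exit promptly on shutdown
      try {
        connect();
        return;
      } catch (IOException e) {
        if (++attempt > 45) throw e; // retry policy exhausted
        try {
          Thread.sleep(1000);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }
  }

  private void connect() throws IOException {
    throw new IOException("simulated connectivity problem");
  }
}
{code}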



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11520) Clean incomplete multi-part uploads in S3A tests

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311395#comment-14311395
 ] 

Hudson commented on HADOOP-11520:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/98/])


> Clean incomplete multi-part uploads in S3A tests
> 
>
> Key: HADOOP-11520
> URL: https://issues.apache.org/jira/browse/HADOOP-11520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11520.001.patch
>
>
> As proposed in HADOOP-11488, this patch activates the purging functionality 
> of s3a at the start of each test. This cleans up any in-progress multi-part 
> uploads in the test bucket, preventing unknowing users from eternally paying 
> Amazon for the space of the already-uploaded parts of previous tests that 
> failed during a multi-part upload. 
> People who have run the s3a tests should run a single test (evidently after 
> this patch is applied) against all their test buckets (or manually abort the 
> multipart uploads).
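
For reference, the purge behaviour is controlled by s3a configuration. A sketch of enabling it for a test run (property names as introduced around this work; verify them against your Hadoop version):

{code}
import org.apache.hadoop.conf.Configuration;

public class S3aPurgeSetup {
  public static Configuration testConf() {
    Configuration conf = new Configuration();
    // Abort in-progress multipart uploads in the bucket when the
    // filesystem initializes, so failed earlier test runs stop
    // accruing storage charges.
    conf.setBoolean("fs.s3a.multipart.purge", true);
    // Only purge uploads older than this many seconds.
    conf.setLong("fs.s3a.multipart.purge.age", 24 * 60 * 60);
    return conf;
  }
}
{code}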



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11535) TableMapping related tests failed due to 'successful' resolving of invalid test hostname

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311400#comment-14311400
 ] 

Hudson commented on HADOOP-11535:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/98/])


> TableMapping related tests failed due to 'successful' resolving of invalid 
> test hostname
> 
>
> Key: HADOOP-11535
> URL: https://issues.apache.org/jira/browse/HADOOP-11535
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-11535-v1.patch
>
>
> When running mvn test in my environment, it reported the following.
> {noformat}
> Failed tests: 
>   TestTableMapping.testClearingCachedMappings:144 expected: but 
> was:
>   TestTableMapping.testTableCaching:79 expected: but 
> was:
>   TestTableMapping.testResolve:56 expected: but 
> was:
> {noformat}
> It's caused by the 'bad test' hostname 'a.b.c' actually resolving, as 
> follows.
> {noformat}
> [drankye@zkdesk hadoop-common-project]$ ping a.b.c
> PING a.b.c (220.250.64.228) 56(84) bytes of data.
> {noformat}
> I understand this may happen in just my local environment, and I document it 
> just in case others also meet it. We could use an even worse hostname than 
> 'a.b.c' to avoid such a situation.
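
A quick way to check whether an environment will hit this (some resolvers wildcard-resolve any name); a small, self-contained sketch:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
  public static void main(String[] args) {
    String host = args.length > 0 ? args[0] : "a.b.c";
    try {
      // On a wildcard-resolving network this "bad" name resolves,
      // which is exactly what breaks the TestTableMapping assertions.
      System.out.println(host + " -> "
          + InetAddress.getByName(host).getHostAddress());
    } catch (UnknownHostException e) {
      System.out.println(host + " does not resolve; tests are safe");
    }
  }
}
{code}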



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311393#comment-14311393
 ] 

Hudson commented on HADOOP-11485:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/98/])


> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Fix For: 3.0.0
>
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch, HADOOP-11485-03.patch, HADOOP-11485-04.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11485) Pluggable shell integration

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311363#comment-14311363
 ] 

Hudson commented on HADOOP-11485:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #832 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/832/])


> Pluggable shell integration
> ---
>
> Key: HADOOP-11485
> URL: https://issues.apache.org/jira/browse/HADOOP-11485
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: scripts, shell
> Fix For: 3.0.0
>
> Attachments: HADOOP-11485-00.patch, HADOOP-11485-01.patch, 
> HADOOP-11485-02.patch, HADOOP-11485-03.patch, HADOOP-11485-04.patch
>
>
> It would be useful to provide a way for core and non-core Hadoop components 
> to plug into the shell infrastructure.  This would allow us to pull the HDFS, 
> MapReduce, and YARN shell functions out of hadoop-functions.sh.  
> Additionally, it should let 3rd parties such as HBase influence things like 
> classpaths at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11520) Clean incomplete multi-part uploads in S3A tests

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311365#comment-14311365
 ] 

Hudson commented on HADOOP-11520:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #832 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/832/])


> Clean incomplete multi-part uploads in S3A tests
> 
>
> Key: HADOOP-11520
> URL: https://issues.apache.org/jira/browse/HADOOP-11520
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.6.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11520.001.patch
>
>
> As proposed in HADOOP-11488, this patch activates the purging functionality 
> of s3a at the start of each test. This cleans up any in-progress multi-part 
> uploads in the test bucket, preventing unknowing users from eternally paying 
> Amazon for the space of the already-uploaded parts of previous tests that 
> failed during a multi-part upload. 
> People who have run the s3a tests should run a single test (evidently after 
> this patch is applied) against all their test buckets (or manually abort the 
> multipart uploads).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11535) TableMapping related tests failed due to 'successful' resolving of invalid test hostname

2015-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311370#comment-14311370
 ] 

Hudson commented on HADOOP-11535:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #832 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/832/])


> TableMapping related tests failed due to 'successful' resolving of invalid 
> test hostname
> 
>
> Key: HADOOP-11535
> URL: https://issues.apache.org/jira/browse/HADOOP-11535
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-11535-v1.patch
>
>
> When running mvn test in my environment, it reported the following.
> {noformat}
> Failed tests: 
>   TestTableMapping.testClearingCachedMappings:144 expected: but 
> was:
>   TestTableMapping.testTableCaching:79 expected: but 
> was:
>   TestTableMapping.testResolve:56 expected: but 
> was:
> {noformat}
> It's caused by the 'bad test' hostname 'a.b.c' actually resolving, as 
> follows.
> {noformat}
> [drankye@zkdesk hadoop-common-project]$ ping a.b.c
> PING a.b.c (220.250.64.228) 56(84) bytes of data.
> {noformat}
> I understand this may happen in just my local environment, and I document it 
> just in case others also meet it. We could use an even worse hostname than 
> 'a.b.c' to avoid such a situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11512) Use getTrimmedStrings when reading serialization keys

2015-02-08 Thread Ryan P (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan P updated HADOOP-11512:

Attachment: HADOOP-11512.patch

Alright, so hopefully I got it this time. I apologize for all the testing 
mishaps.


> Use getTrimmedStrings when reading serialization keys
> -
>
> Key: HADOOP-11512
> URL: https://issues.apache.org/jira/browse/HADOOP-11512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.6.0
>Reporter: Harsh J
>Assignee: Ryan P
>Priority: Minor
> Attachments: HADOOP-11512.patch, HADOOP-11512.patch, 
> HADOOP-11512.patch, HADOOP-11512.patch
>
>
> In the file 
> {{hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/serializer/SerializationFactory.java}},
>  we grab the IO_SERIALIZATIONS_KEY config as Configuration#getStrings(…) 
> which does not trim the input. This could cause confusing user issues if 
> someone manually overrides the key in the XML files/Configuration object 
> without using the dynamic approach.
> The call should instead use Configuration#getTrimmedStrings(…), so the 
> whitespace is trimmed before the class names are searched on the classpath.
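
A small demonstration of the difference between the two calls (both methods exist on org.apache.hadoop.conf.Configuration; the padded value mimics a hand-edited XML file):

{code}
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

public class TrimDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Whitespace as a user might leave it when hand-editing the XML:
    conf.set("io.serializations",
        " org.apache.hadoop.io.serializer.WritableSerialization , " +
        " org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization ");

    // getStrings keeps the padding, so classpath lookups would fail:
    System.out.println(Arrays.toString(conf.getStrings("io.serializations")));
    // getTrimmedStrings yields clean class names:
    System.out.println(
        Arrays.toString(conf.getTrimmedStrings("io.serializations")));
  }
}
{code}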



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11541) Raw XOR coder

2015-02-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311320#comment-14311320
 ] 

Kai Zheng commented on HADOOP-11541:


I provided a minor patch to address this in HADOOP-11563. [~hitliuyi], can you 
review it? Thanks.

> Raw XOR coder
> -
>
> Key: HADOOP-11541
> URL: https://issues.apache.org/jira/browse/HADOOP-11541
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11541-v1.patch, HADOOP-11541-v2.patch
>
>
> This will implement XOR codes by porting the code from HDFS-RAID. The XOR 
> coder is needed by some high-level codecs like LRC.
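
The arithmetic behind the coder is simple: the parity unit is the bytewise XOR of the data units, and any single erased unit is the XOR of the survivors. A minimal sketch of the idea (not the patch's coder API):

{code}
public class XorSketch {
  // parity[i] = d0[i] ^ d1[i] ^ ... across all data units
  static byte[] encode(byte[][] dataUnits) {
    byte[] parity = new byte[dataUnits[0].length];
    for (byte[] unit : dataUnits) {
      for (int i = 0; i < unit.length; i++) {
        parity[i] ^= unit[i];
      }
    }
    return parity;
  }

  // Recovering one erased unit is the same XOR over the survivors
  // (the remaining data units plus the parity unit).
  static byte[] decodeOneErasure(byte[][] survivors) {
    return encode(survivors);
  }
}
{code}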



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11563:
---
Status: Patch Available  (was: Open)

> Add the missed entry for CHANGES.txt
> 
>
> Key: HADOOP-11563
> URL: https://issues.apache.org/jira/browse/HADOOP-11563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11563-v1.patch
>
>
> When committing HADOOP-11541, we forgot to update the 
> hadoop-common/CHANGES-HDFS-EC-7285.txt file. This adds the missing entry. 
> Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11563:
---
Attachment: HADOOP-11563-v1.patch

Uploaded a patch adding the entry.

> Add the missed entry for CHANGES.txt
> 
>
> Key: HADOOP-11563
> URL: https://issues.apache.org/jira/browse/HADOOP-11563
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Trivial
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11563-v1.patch
>
>
> When committing HADOOP-11541, we forgot to update the 
> hadoop-common/CHANGES-HDFS-EC-7285.txt file. This adds the missing entry. 
> Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11563) Add the missed entry for CHANGES.txt

2015-02-08 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11563:
--

 Summary: Add the missed entry for CHANGES.txt
 Key: HADOOP-11563
 URL: https://issues.apache.org/jira/browse/HADOOP-11563
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: HDFS-EC
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Trivial
 Fix For: HDFS-EC


When committing HADOOP-11541, we forgot to update the 
hadoop-common/CHANGES-HDFS-EC-7285.txt file. This adds the missing entry. 
Thanks [~hitliuyi] for pointing this out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11542) Raw Reed-Solomon coder in pure Java

2015-02-08 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11542:
---
Attachment: HADOOP-11542-v3.patch

Updated the patch according to the above review and discussion.

> Raw Reed-Solomon coder in pure Java
> ---
>
> Key: HADOOP-11542
> URL: https://issues.apache.org/jira/browse/HADOOP-11542
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-EC
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11542-v1.patch, HADOOP-11542-v2.patch, 
> HADOOP-11542-v3.patch
>
>
> This will implement an RS coder by porting existing code from HDFS-RAID into 
> the new codec and coder framework, which could be useful when native support 
> isn't available or convenient in some environments or platforms.
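
Unlike XOR, Reed-Solomon needs multiplication over GF(2^8). A minimal sketch of that primitive using the 0x11D reduction polynomial (the HDFS-RAID code builds log/antilog tables around the same arithmetic; this is illustrative only):

{code}
public class GF256Sketch {
  // Carry-less "Russian peasant" multiply in GF(2^8), reducing by
  // x^8 + x^4 + x^3 + x^2 + 1 (0x11D), the primitive polynomial
  // commonly used by Reed-Solomon implementations.
  static int mul(int a, int b) {
    int product = 0;
    for (int i = 0; i < 8; i++) {
      if ((b & 1) != 0) product ^= a;
      b >>= 1;
      boolean carry = (a & 0x80) != 0;
      a = (a << 1) & 0xFF;
      if (carry) a ^= 0x1D; // low byte of 0x11D
    }
    return product;
  }

  public static void main(String[] args) {
    // A sample product; addition in this field is plain XOR.
    System.out.printf("0x53 * 0xCA = 0x%02X%n", mul(0x53, 0xCA));
  }
}
{code}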



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11541) Raw XOR coder

2015-02-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14311299#comment-14311299
 ] 

Kai Zheng commented on HADOOP-11541:


Thanks [~hitliuyi] for looking at this.
bq.You should also write the contribution name
Yes, I should have followed the convention, though I did notice some exceptions.
bq.you should change the corresponding CHANGES.txt
Yes, we have the {{CHANGES.txt}}; sorry, I forgot to update it. Do we need to file 
a JIRA to fix this?

> Raw XOR coder
> -
>
> Key: HADOOP-11541
> URL: https://issues.apache.org/jira/browse/HADOOP-11541
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-EC
>
> Attachments: HADOOP-11541-v1.patch, HADOOP-11541-v2.patch
>
>
> This will implement XOR codes by porting the code from HDFS-RAID. The XOR 
> coder is needed by some high-level codecs like LRC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)