[jira] [Updated] (HADOOP-8765) LocalDirAllocator.ifExists API is broken and unused

2012-09-05 Thread Hemanth Yamijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Yamijala updated HADOOP-8765:
-

Attachment: HADOOP-8765.patch

The attached patch removes both the LocalDirAllocator.ifExists API and the 
AllocatorPerContext.ifExists API. Could someone please review?

 LocalDirAllocator.ifExists API is broken and unused
 ---

 Key: HADOOP-8765
 URL: https://issues.apache.org/jira/browse/HADOOP-8765
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8765.patch


 LocalDirAllocator.ifExists calls AllocatorPerContext.ifExists, which accesses 
 the localDirsPath variable while it is still uninitialised. Hence, any code 
 that uses this API is likely to fail with a NullPointerException.
 However, this API is currently not used anywhere in trunk. The earlier usage 
 was in IsolationRunner, which has since been removed via MAPREDUCE-2606.
 Hence, we could remove this API from trunk and, if required, fix it for the 
 1.x branch.
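 To make the failure mode concrete, here is a minimal, hypothetical sketch of 
 the pattern (illustrative code, not the actual Hadoop source): the field is 
 only populated by the allocation path, so a call that goes straight to 
 ifExists dereferences null.
 {code}
 import java.io.File;

 // Hypothetical reduction of the AllocatorPerContext.ifExists bug pattern.
 class AllocatorPerContext {
   private String[] localDirsPath; // only set by the allocation code path

   void confChanged(String[] dirs) {
     localDirsPath = dirs; // never runs if callers only invoke ifExists
   }

   boolean ifExists(String pathStr) {
     // NPE here: localDirsPath is still null if confChanged() was never called.
     for (String dir : localDirsPath) {
       if (new File(dir, pathStr).exists()) {
         return true;
       }
     }
     return false;
   }
 }
 {code}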

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8758) Support for pluggable token implementations

2012-09-05 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448518#comment-13448518
 ] 

Kan Zhang commented on HADOOP-8758:
---

Thanks all for your comments.

[~eric14], one use case is exactly the Gateway model you mentioned. Suppose a 
cluster can be accessed only through a gateway machine, and the gateway is 
configured to authenticate its external clients using some other method 
(e.g., LDAP). We want to turn on Hadoop security within the cluster to 
support multi-tenancy, but we don't want to invest in Kerberos. One thing we 
can do is pre-generate and configure a shared key between the Gateway and the 
NN during install, and use it to set up a secure connection between them in 
place of Kerberos. With a pluggable interface supporting multiple types of 
delegation tokens, such shared keys can be implemented as a special kind of 
token, and only connections authenticated with these special tokens can be 
used to fetch delegation tokens. In other words, the special tokens are used 
in place of Kerberos tickets to bootstrap security. The difference is that 
the Kerberos mechanism is more general: it can be used by any Kerberized 
client without install changes, whereas here we need to pre-configure our 
special tokens pair-wise for every pair of entities that needs authenticated 
connections. Hence, its usefulness is limited to pre-known services, not 
arbitrary clients.

[~revans2] and [~daryn], I agree we should allow SIMPLE auth to be coupled 
with tokens.

[~owen.omalley], actually I'm proposing we use the same authentication 
mechanism that existing delegation tokens use (yes, we may need to consider 
upgrading the mechanism at some point, depending on what's available in 
standard libs). What's new here is a new type of credential, and hence a new 
type of SecretManager to deal with it. Since the SecretManagers will be 
stacked together and deal with different types of credentials, they are 
orthogonal and can evolve independently. HTTP could be used for fetching 
tokens, but the root issue here is which auth method to use for intra-cluster 
authentication (e.g., between the Gateway and the NN), and preferably we 
don't want to introduce an external dependency (e.g., LDAP or a Kerberos 
KDC). We could use a shared-key-based mechanism over HTTP, but that doesn't 
save us any trouble in configuring the shared keys; we might as well use RPC 
with its existing token framework.
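To make the stacking idea concrete, here is a minimal, hypothetical sketch 
(these interfaces are invented for illustration and are not Hadoop's actual 
SecretManager API): each manager owns one credential type, and a dispatcher 
tries each in turn, so the implementations stay orthogonal.
{code}
import java.util.Arrays;
import java.util.List;

interface TokenSecretManager {
  boolean handles(String tokenKind);
  boolean checkPassword(String tokenKind, byte[] identifier, byte[] password);
}

class StackedSecretManager implements TokenSecretManager {
  private final List<TokenSecretManager> managers;

  StackedSecretManager(TokenSecretManager... managers) {
    this.managers = Arrays.asList(managers);
  }

  @Override
  public boolean handles(String kind) {
    for (TokenSecretManager m : managers) {
      if (m.handles(kind)) {
        return true;
      }
    }
    return false;
  }

  @Override
  public boolean checkPassword(String kind, byte[] id, byte[] pw) {
    // Dispatch on the credential type: delegation tokens and pre-configured
    // shared-key tokens can be validated by different managers independently.
    for (TokenSecretManager m : managers) {
      if (m.handles(kind)) {
        return m.checkPassword(kind, id, pw);
      }
    }
    return false;
  }
}
{code}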

[~tucu00], agreed on decoupling intra-cluster authentication from user-facing 
authentication. This JIRA is a step in that direction. Regarding consolidating 
different token implementations, I see the benefit and have discussed it with 
you and others. Let's leave that for a separate discussion.

 Support for pluggable token implementations
 ---

 Key: HADOOP-8758
 URL: https://issues.apache.org/jira/browse/HADOOP-8758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Reporter: Kan Zhang
Assignee: Kan Zhang

 Variants of the delegation token mechanism have been employed by different 
 Hadoop services (NN, JT, RM, etc.) to re-authenticate a previously 
 Kerberos-authenticated client. While the existing delegation token mechanism 
 complements Kerberos well, it doesn't necessarily have to be coupled with 
 Kerberos. In principle, delegation tokens can be coupled with any 
 authentication mechanism that bootstraps security. In particular, they can be 
 coupled with other token implementations that use the same DIGEST-MD5 auth 
 method. For example, a token can be pre-generated in an out-of-band manner 
 and configured as a shared secret key between the NN and the JT, allowing the 
 JT to make its initial authentication to the NN. This simple example doesn't 
 deal with token renewal etc., but it helps to illustrate the point that if we 
 can support multiple pluggable token implementations, it opens up the 
 possibility for different users to plug in the token implementation of their 
 choice to bootstrap security. Such a token-based mechanism has advantages 
 over Kerberos in that 1) it doesn't require Kerberos infrastructure, and 2) 
 it leverages the existing SASL DIGEST-MD5 auth method and doesn't require 
 adding a new RPC auth method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8749) HADOOP-8031 changed the way in which relative xincludes are handled in Configuration.

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448523#comment-13448523
 ] 

Hadoop QA commented on HADOOP-8749:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12543777/HADOOP-8749_rev2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.conf.TestConfiguration

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1400//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1400//console

This message is automatically generated.

 HADOOP-8031 changed the way in which relative xincludes are handled in 
 Configuration.
 -

 Key: HADOOP-8749
 URL: https://issues.apache.org/jira/browse/HADOOP-8749
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Ahmed Radwan
Assignee: Ahmed Radwan
 Attachments: HADOOP-8749.patch, HADOOP-8749_rev2.patch


 The patch from HADOOP-8031 changed the XML parsing to use 
 DocumentBuilder#parse(InputStream) on uri.openStream() instead of 
 DocumentBuilder#parse(String) on uri.toString(). I looked into the 
 implementation of javax.xml.parsers.DocumentBuilder and 
 org.xml.sax.InputSource, and there is a difference when the DocumentBuilder 
 parse(String) method is used versus parse(InputStream).
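 A hedged sketch of the difference (the class and method below are 
 illustrative, not from the patch): parse(String systemId) records the 
 document's base URI, so relative xi:include hrefs resolve against the file's 
 own location, while parse(InputStream) carries no systemId. Supplying the 
 systemId explicitly on an InputSource restores the old resolution behaviour.
 {code}
 import java.io.InputStream;
 import java.net.URL;
 import javax.xml.parsers.DocumentBuilder;
 import javax.xml.parsers.DocumentBuilderFactory;
 import org.w3c.dom.Document;
 import org.xml.sax.InputSource;

 public class XIncludeBaseUri {
   public static Document load(URL uri) throws Exception {
     DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
     factory.setNamespaceAware(true);
     factory.setXIncludeAware(true); // enable xi:include processing
     DocumentBuilder builder = factory.newDocumentBuilder();

     // builder.parse(uri.toString()) would set the systemId implicitly.
     // With a raw InputStream there is no base URI, so set it by hand:
     try (InputStream in = uri.openStream()) {
       InputSource source = new InputSource(in);
       source.setSystemId(uri.toString());
       return builder.parse(source);
     }
   }
 }
 {code}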

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8758) Support for pluggable token implementations

2012-09-05 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448554#comment-13448554
 ] 

Kan Zhang commented on HADOOP-8758:
---

[~owen.omalley], on second thought, I think you do have a point on the HTTP 
option. We'll explore further in that direction.

 Support for pluggable token implementations
 ---

 Key: HADOOP-8758
 URL: https://issues.apache.org/jira/browse/HADOOP-8758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Reporter: Kan Zhang
Assignee: Kan Zhang

 Variants of the delegation token mechanism have been employed by different 
 Hadoop services (NN, JT, RM, etc.) to re-authenticate a previously 
 Kerberos-authenticated client. While the existing delegation token mechanism 
 complements Kerberos well, it doesn't necessarily have to be coupled with 
 Kerberos. In principle, delegation tokens can be coupled with any 
 authentication mechanism that bootstraps security. In particular, they can be 
 coupled with other token implementations that use the same DIGEST-MD5 auth 
 method. For example, a token can be pre-generated in an out-of-band manner 
 and configured as a shared secret key between the NN and the JT, allowing the 
 JT to make its initial authentication to the NN. This simple example doesn't 
 deal with token renewal etc., but it helps to illustrate the point that if we 
 can support multiple pluggable token implementations, it opens up the 
 possibility for different users to plug in the token implementation of their 
 choice to bootstrap security. Such a token-based mechanism has advantages 
 over Kerberos in that 1) it doesn't require Kerberos infrastructure, and 2) 
 it leverages the existing SASL DIGEST-MD5 auth method and doesn't require 
 adding a new RPC auth method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8762) Mark container-provided dependencies with 'provided' scope

2012-09-05 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-8762:
--

Assignee: Tom White
  Status: Patch Available  (was: Open)

 Mark container-provided dependencies with 'provided' scope
 --

 Key: HADOOP-8762
 URL: https://issues.apache.org/jira/browse/HADOOP-8762
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8762.patch


 For example the Tomcat and Jetty dependencies should be 'provided' since they 
 are provided by the container (i.e. the Hadoop installation).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8749) HADOOP-8031 changed the way in which relative xincludes are handled in Configuration.

2012-09-05 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-8749:
-

Attachment: HADOOP-8749_rev3.patch

I missed adding the tmp directory creation to the previous patch (I had it 
locally, which is why it was running successfully). Here is the updated 
patch. The TestZKFailoverController failures seem unrelated (see HADOOP-8591).

 HADOOP-8031 changed the way in which relative xincludes are handled in 
 Configuration.
 -

 Key: HADOOP-8749
 URL: https://issues.apache.org/jira/browse/HADOOP-8749
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Ahmed Radwan
Assignee: Ahmed Radwan
 Attachments: HADOOP-8749.patch, HADOOP-8749_rev2.patch, 
 HADOOP-8749_rev3.patch


 The patch from HADOOP-8031 changed the XML parsing to use 
 DocumentBuilder#parse(InputStream) on uri.openStream() instead of 
 DocumentBuilder#parse(String) on uri.toString(). I looked into the 
 implementation of javax.xml.parsers.DocumentBuilder and 
 org.xml.sax.InputSource, and there is a difference when the DocumentBuilder 
 parse(String) method is used versus parse(InputStream).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8762) Mark container-provided dependencies with 'provided' scope

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448572#comment-13448572
 ] 

Hadoop QA commented on HADOOP-8762:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543721/HADOOP-8762.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javac.  The patch appears to cause the build to fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1401//console

This message is automatically generated.

 Mark container-provided dependencies with 'provided' scope
 --

 Key: HADOOP-8762
 URL: https://issues.apache.org/jira/browse/HADOOP-8762
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8762.patch


 For example the Tomcat and Jetty dependencies should be 'provided' since they 
 are provided by the container (i.e. the Hadoop installation).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8749) HADOOP-8031 changed the way in which relative xincludes are handled in Configuration.

2012-09-05 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-8749:
-

Attachment: HADOOP-8749_rev4.patch

I have modified the test case again so that it cleans up the created 
directory properly.

 HADOOP-8031 changed the way in which relative xincludes are handled in 
 Configuration.
 -

 Key: HADOOP-8749
 URL: https://issues.apache.org/jira/browse/HADOOP-8749
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Ahmed Radwan
Assignee: Ahmed Radwan
 Attachments: HADOOP-8749.patch, HADOOP-8749_rev2.patch, 
 HADOOP-8749_rev3.patch, HADOOP-8749_rev4.patch


 The patch from HADOOP-8031 changed the XML parsing to use 
 DocumentBuilder#parse(InputStream) on uri.openStream() instead of 
 DocumentBuilder#parse(String) on uri.toString(). I looked into the 
 implementation of javax.xml.parsers.DocumentBuilder and 
 org.xml.sax.InputSource, and there is a difference when the DocumentBuilder 
 parse(String) method is used versus parse(InputStream).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8765) LocalDirAllocator.ifExists API is broken and unused

2012-09-05 Thread Hemanth Yamijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Yamijala updated HADOOP-8765:
-

Status: Patch Available  (was: Open)

 LocalDirAllocator.ifExists API is broken and unused
 ---

 Key: HADOOP-8765
 URL: https://issues.apache.org/jira/browse/HADOOP-8765
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8765.patch


 LocalDirAllocator.ifExists calls AllocatorPerContext.ifExists, which accesses 
 the localDirsPath variable while it is still uninitialised. Hence, any code 
 that uses this API is likely to fail with a NullPointerException.
 However, this API is currently not used anywhere in trunk. The earlier usage 
 was in IsolationRunner, which has since been removed via MAPREDUCE-2606.
 Hence, we could remove this API from trunk and, if required, fix it for the 
 1.x branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8749) HADOOP-8031 changed the way in which relative xincludes are handled in Configuration.

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448593#comment-13448593
 ] 

Hadoop QA commented on HADOOP-8749:
---

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12543828/HADOOP-8749_rev4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1403//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1403//console

This message is automatically generated.

 HADOOP-8031 changed the way in which relative xincludes are handled in 
 Configuration.
 -

 Key: HADOOP-8749
 URL: https://issues.apache.org/jira/browse/HADOOP-8749
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Ahmed Radwan
Assignee: Ahmed Radwan
 Attachments: HADOOP-8749.patch, HADOOP-8749_rev2.patch, 
 HADOOP-8749_rev3.patch, HADOOP-8749_rev4.patch


 The patch from HADOOP-8031 changed the XML parsing to use 
 DocumentBuilder#parse(InputStream) on uri.openStream() instead of 
 DocumentBuilder#parse(String) on uri.toString(). I looked into the 
 implementation of javax.xml.parsers.DocumentBuilder and 
 org.xml.sax.InputSource, and there is a difference when the DocumentBuilder 
 parse(String) method is used versus parse(InputStream).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8765) LocalDirAllocator.ifExists API is broken and unused

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448599#comment-13448599
 ] 

Hadoop QA commented on HADOOP-8765:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543815/HADOOP-8765.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1404//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1404//console

This message is automatically generated.

 LocalDirAllocator.ifExists API is broken and unused
 ---

 Key: HADOOP-8765
 URL: https://issues.apache.org/jira/browse/HADOOP-8765
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8765.patch


 LocalDirAllocator.ifExists calls AllocatorPerContext.ifExists, which accesses 
 the localDirsPath variable while it is still uninitialised. Hence, any code 
 that uses this API is likely to fail with a NullPointerException.
 However, this API is currently not used anywhere in trunk. The earlier usage 
 was in IsolationRunner, which has since been removed via MAPREDUCE-2606.
 Hence, we could remove this API from trunk and, if required, fix it for the 
 1.x branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8765) LocalDirAllocator.ifExists API is broken and unused

2012-09-05 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448603#comment-13448603
 ] 

Hemanth Yamijala commented on HADOOP-8765:
--

bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.
bq. Please justify why no new tests are needed for this patch.
bq. Also please list what manual steps were performed to verify this patch.

The patch removes code. Hence, no new tests.

 LocalDirAllocator.ifExists API is broken and unused
 ---

 Key: HADOOP-8765
 URL: https://issues.apache.org/jira/browse/HADOOP-8765
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8765.patch


 LocalDirAllocator.ifExists calls AllocatorPerContext.ifExists, which accesses 
 the localDirsPath variable while it is still uninitialised. Hence, any code 
 that uses this API is likely to fail with a NullPointerException.
 However, this API is currently not used anywhere in trunk. The earlier usage 
 was in IsolationRunner, which has since been removed via MAPREDUCE-2606.
 Hence, we could remove this API from trunk and, if required, fix it for the 
 1.x branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread giovanni delussu (JIRA)
giovanni delussu created HADOOP-8767:


 Summary: secondary namenode on slave machines
 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor


When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread giovanni delussu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

giovanni delussu updated HADOOP-8767:
-

Status: Patch Available  (was: Open)

 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor

 When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
 starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
 machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread giovanni delussu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

giovanni delussu updated HADOOP-8767:
-

Status: Open  (was: Patch Available)

 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor

 When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
 starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
 machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448681#comment-13448681
 ] 

Hudson commented on HADOOP-8737:


Integrated in Hadoop-Hdfs-trunk #1156 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1156/])
HADOOP-8764. CMake: HADOOP-8737 broke ARM build. Contributed by Trevor 
Robinson (Revision 1380984)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380984
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8737.001.patch, HADOOP-8737.002.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8764) CMake: HADOOP-8737 broke ARM build

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448680#comment-13448680
 ] 

Hudson commented on HADOOP-8764:


Integrated in Hadoop-Hdfs-trunk #1156 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1156/])
HADOOP-8764. CMake: HADOOP-8737 broke ARM build. Contributed by Trevor 
Robinson (Revision 1380984)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380984
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 CMake: HADOOP-8737 broke ARM build
 --

 Key: HADOOP-8764
 URL: https://issues.apache.org/jira/browse/HADOOP-8764
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_06, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_06/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-1000-highbank, arch: arm, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8764.patch


 ARM build is broken again: CMAKE_SYSTEM_PROCESSOR comes from {{uname -p}}, 
 which reports values like armv7l for the ARMv7 architecture. However, the 
 OpenJDK and Oracle ARM JREs both use jre/lib/arm for the JVM directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8758) Support for pluggable token implementations

2012-09-05 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448717#comment-13448717
 ] 

Daryn Sharp commented on HADOOP-8758:
-

Kan, this is a great discussion.  Would you clarify the references to needing 
multiple secret managers?  I think we should be able to support multiple auth 
types with the existing secret managers.  The managers are just an authz 
mechanism that is, or should be, decoupled from auth.  That said, what 
additional functionality would these other secret managers provide?

 Support for pluggable token implementations
 ---

 Key: HADOOP-8758
 URL: https://issues.apache.org/jira/browse/HADOOP-8758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Reporter: Kan Zhang
Assignee: Kan Zhang

 Variants of the delegation token mechanism have been employed by different 
 Hadoop services (NN, JT, RM, etc.) to re-authenticate a previously 
 Kerberos-authenticated client. While the existing delegation token mechanism 
 complements Kerberos well, it doesn't necessarily have to be coupled with 
 Kerberos. In principle, delegation tokens can be coupled with any 
 authentication mechanism that bootstraps security. In particular, they can be 
 coupled with other token implementations that use the same DIGEST-MD5 auth 
 method. For example, a token can be pre-generated in an out-of-band manner 
 and configured as a shared secret key between the NN and the JT, allowing the 
 JT to make its initial authentication to the NN. This simple example doesn't 
 deal with token renewal etc., but it helps to illustrate the point that if we 
 can support multiple pluggable token implementations, it opens up the 
 possibility for different users to plug in the token implementation of their 
 choice to bootstrap security. Such a token-based mechanism has advantages 
 over Kerberos in that 1) it doesn't require Kerberos infrastructure, and 2) 
 it leverages the existing SASL DIGEST-MD5 auth method and doesn't require 
 adding a new RPC auth method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8764) CMake: HADOOP-8737 broke ARM build

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448744#comment-13448744
 ] 

Hudson commented on HADOOP-8764:


Integrated in Hadoop-Mapreduce-trunk #1187 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1187/])
HADOOP-8764. CMake: HADOOP-8737 broke ARM build. Contributed by Trevor 
Robinson (Revision 1380984)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380984
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 CMake: HADOOP-8737 broke ARM build
 --

 Key: HADOOP-8764
 URL: https://issues.apache.org/jira/browse/HADOOP-8764
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_06, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_06/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-1000-highbank, arch: arm, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8764.patch


 ARM build is broken again: CMAKE_SYSTEM_PROCESSOR comes from {{uname -p}}, 
 which reports values like armv7l for the ARMv7 architecture. However, the 
 OpenJDK and Oracle ARM JREs both use jre/lib/arm for the JVM directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448745#comment-13448745
 ] 

Hudson commented on HADOOP-8737:


Integrated in Hadoop-Mapreduce-trunk #1187 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1187/])
HADOOP-8764. CMake: HADOOP-8737 broke ARM build. Contributed by Trevor 
Robinson (Revision 1380984)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380984
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8737.001.patch, HADOOP-8737.002.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread giovanni delussu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

giovanni delussu updated HADOOP-8767:
-

Attachment: patch_slaves.sh_hadoop-1.0.3_fromtar.patch
patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch

Patch for the tar release hadoop-1.0.3.tar.gz.
Files patched are:
hadoop-config.sh
slaves.sh

 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor
 Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
 patch_slaves.sh_hadoop-1.0.3_fromtar.patch


 When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
 starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
 machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread giovanni delussu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

giovanni delussu updated HADOOP-8767:
-

Fix Version/s: 1.0.3
   Status: Patch Available  (was: Open)

 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor
 Fix For: 1.0.3

 Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
 patch_slaves.sh_hadoop-1.0.3_fromtar.patch


 When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
 starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
 machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448771#comment-13448771
 ] 

Hadoop QA commented on HADOOP-8767:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12543859/patch_slaves.sh_hadoop-1.0.3_fromtar.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1405//console

This message is automatically generated.

 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor
 Fix For: 1.0.3

 Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
 patch_slaves.sh_hadoop-1.0.3_fromtar.patch


 When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
 starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
 machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-8768) TestDistCp is @ignored

2012-09-05 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins moved HDFS-3865 to HADOOP-8768:
---

  Component/s: (was: tools)
 Target Version/s:   (was: 2.2.0-alpha)
Affects Version/s: (was: 2.2.0-alpha)
   2.2.0-alpha
   Issue Type: Bug  (was: Test)
  Key: HADOOP-8768  (was: HDFS-3865)
  Project: Hadoop Common  (was: Hadoop HDFS)

 TestDistCp is @ignored
 --

 Key: HADOOP-8768
 URL: https://issues.apache.org/jira/browse/HADOOP-8768
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Priority: Minor

 We should fix TestDistCp so that it actually runs, rather than being ignored.
 {code}
 @Ignore
 public class TestDistCp {
   private static final Log LOG = LogFactory.getLog(TestDistCp.class);
   private static List<Path> pathList = new ArrayList<Path>();
   ...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8768) TestDistCp is @ignored

2012-09-05 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8768:


Component/s: test

 TestDistCp is @ignored
 --

 Key: HADOOP-8768
 URL: https://issues.apache.org/jira/browse/HADOOP-8768
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, tools/distcp
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Priority: Minor

 We should fix TestDistCp so that it actually runs, rather than being ignored.
 {code}
 @Ignore
 public class TestDistCp {
   private static final Log LOG = LogFactory.getLog(TestDistCp.class);
   private static List<Path> pathList = new ArrayList<Path>();
   ...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8768) TestDistCp is @ignored

2012-09-05 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8768:


Component/s: tools/distcp

 TestDistCp is @ignored
 --

 Key: HADOOP-8768
 URL: https://issues.apache.org/jira/browse/HADOOP-8768
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, tools/distcp
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Priority: Minor

 We should fix TestDistCp so that it actually runs, rather than being ignored.
 {code}
 @Ignore
 public class TestDistCp {
   private static final Log LOG = LogFactory.getLog(TestDistCp.class);
   private static List<Path> pathList = new ArrayList<Path>();
   ...
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8764) CMake: HADOOP-8737 broke ARM build

2012-09-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448928#comment-13448928
 ] 

Colin Patrick McCabe commented on HADOOP-8764:
--

This patch looks reasonable to me (although I do not have an ARM device to test 
on).

Thanks for looking at this, Trevor.

 CMake: HADOOP-8737 broke ARM build
 --

 Key: HADOOP-8764
 URL: https://issues.apache.org/jira/browse/HADOOP-8764
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_06, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_06/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-1000-highbank, arch: arm, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8764.patch


 ARM build is broken again: CMAKE_SYSTEM_PROCESSOR comes from {{uname -p}}, 
 which reports values like armv7l for the ARMv7 architecture. However, the 
 OpenJDK and Oracle ARM JREs both use jre/lib/arm for the JVM directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8769) Tests failures on the ARM hosts

2012-09-05 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8769:
---

 Summary: Tests failures on the ARM hosts 
 Key: HADOOP-8769
 URL: https://issues.apache.org/jira/browse/HADOOP-8769
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins


I created a [jenkins job|https://builds.apache.org/job/Hadoop-trunk-ARM] that 
runs on the ARM machines. The local build is now working and running tests 
(thanks Gavin!); however, there are 40 test failures, and it looks like most 
are due to host configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8769) Tests failures on the ARM hosts

2012-09-05 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448948#comment-13448948
 ] 

Eli Collins commented on HADOOP-8769:
-

Here's a recent run: 
https://builds.apache.org/job/Hadoop-trunk-ARM/lastCompletedBuild/testReport

 Tests failures on the ARM hosts 
 

 Key: HADOOP-8769
 URL: https://issues.apache.org/jira/browse/HADOOP-8769
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins

 I created a [jenkins job|https://builds.apache.org/job/Hadoop-trunk-ARM] that 
 runs on the ARM machines. The local build is now working and running tests 
 (thanks Gavin!); however, there are 40 test failures, and it looks like most 
 are due to host configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13448958#comment-13448958
 ] 

Arpit Gupta commented on HADOOP-8767:
-

The patch does not seem to be correctly formatted. For details, take a look 
at http://wiki.apache.org/hadoop/HowToContribute or, if you use git, 
http://wiki.apache.org/hadoop/GitAndHadoop


A few comments on your patch.

Rather than assigning HOSTLIST after hadoop-env.sh is sourced, we should do 
the following:

1. In hadoop-config.sh, source hadoop-env.sh before additional parameters 
such as --hosts are processed. This way, hosts specified on the command line 
will be set appropriately. This helps because in start-dfs.sh the secondary 
namenode is started as

{code}
$bin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start 
secondarynamenode
{code}


2. In slaves.sh, remove the sourcing of hadoop-env.sh, as it has already been 
sourced by hadoop-config.sh. Thus the assignment of HADOOP_SLAVES done by 
hadoop-config.sh will remain in effect rather than being overwritten by 
hadoop-env.sh.


 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor
 Fix For: 1.0.3

 Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
 patch_slaves.sh_hadoop-1.0.3_fromtar.patch


 When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
 starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
 machines listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-8770) NN should not RPC to self to find trash defaults (causes deadlock)

2012-09-05 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins moved HDFS-3876 to HADOOP-8770:
---

  Component/s: (was: name-node)
   trash
 Target Version/s: 2.2.0-alpha  (was: 2.2.0-alpha)
Affects Version/s: (was: 2.2.0-alpha)
   2.2.0-alpha
  Key: HADOOP-8770  (was: HDFS-3876)
  Project: Hadoop Common  (was: Hadoop HDFS)

 NN should not RPC to self to find trash defaults (causes deadlock)
 --

 Key: HADOOP-8770
 URL: https://issues.apache.org/jira/browse/HADOOP-8770
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Eli Collins
Priority: Blocker
 Attachments: hdfs-3876.txt, hdfs-3876.txt, hdfs-3876.txt, 
 hdfs-3876.txt


 When transitioning a SBN to active, I ran into the following situation:
 - the TrashPolicy first gets loaded by an IPC Server Handler thread. The 
 {{initialize}} function then tries to make an RPC to the same node to find 
 out the defaults.
 - This is happening inside the NN write lock (since it's part of the active 
 initialization). Hence, all of the other handler threads are already blocked 
 waiting to get the NN lock.
 - Since no handler threads are free, the RPC blocks forever and the NN never 
 enters active state.
 We need to have a general policy that the NN should never make RPCs to itself 
 for any reason, due to the potential for deadlocks like this.
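 A minimal, self-contained sketch of why the RPC-to-self deadlocks 
 (illustrative code, not the NN source): the handlers are one pool thread, the 
 activating handler holds the write lock, and the RPC it issues can only be 
 served by another handler that doesn't exist.
 {code}
 import java.util.concurrent.*;
 import java.util.concurrent.locks.ReentrantReadWriteLock;

 public class RpcToSelfDeadlock {
   public static void main(String[] args) throws Exception {
     ExecutorService handlers = Executors.newFixedThreadPool(1); // all handlers busy
     ReentrantReadWriteLock nnLock = new ReentrantReadWriteLock();

     Future<?> activation = handlers.submit(() -> {
       nnLock.writeLock().lock();
       try {
         // TrashPolicy.initialize() effectively does this: a blocking call
         // that only another handler thread could serve.
         Future<String> rpcToSelf = handlers.submit(() -> "trash defaults");
         return rpcToSelf.get(); // never completes: the only handler is us
       } finally {
         nnLock.writeLock().unlock();
       }
     });

     try {
       activation.get(2, TimeUnit.SECONDS);
     } catch (TimeoutException expected) {
       System.out.println("deadlock: the handler waits on an RPC only a handler can serve");
     }
     handlers.shutdownNow();
   }
 }
 {code}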

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8624) ProtobufRpcEngine should log all RPCs if TRACE logging is enabled

2012-09-05 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449042#comment-13449042
 ] 

Eli Collins commented on HADOOP-8624:
-

This can be closed, right?

 ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
 -

 Key: HADOOP-8624
 URL: https://issues.apache.org/jira/browse/HADOOP-8624
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8624.txt


 Since all RPC requests/responses are now ProtoBufs, it's easy to add 
 TRACE-level logging output to ProtobufRpcEngine that shows the full content 
 of all calls. This is especially handy when writing/debugging unit tests, 
 but might also be useful to enable at runtime for short periods of time to 
 debug certain production issues.
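 As a hedged sketch of the kind of guard this enables (the class and method 
 here are illustrative, not the committed code; TextFormat.shortDebugString is 
 the stock protobuf-java helper), the proto-to-string cost is paid only when 
 TRACE is actually on:
 {code}
 import com.google.protobuf.Message;
 import com.google.protobuf.TextFormat;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;

 public class RpcTraceLogger {
   private static final Log LOG = LogFactory.getLog(RpcTraceLogger.class);

   // Log the full content of a call at TRACE level, skipping the
   // stringification entirely when tracing is disabled.
   static void traceCall(String method, Message request, Message response) {
     if (LOG.isTraceEnabled()) {
       LOG.trace(method + " request: " + TextFormat.shortDebugString(request)
           + " response: " + TextFormat.shortDebugString(response));
     }
   }
 }
 {code}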

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8770) NN should not RPC to self to find trash defaults (causes deadlock)

2012-09-05 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8770:


  Resolution: Fixed
   Fix Version/s: 2.2.0-alpha
Target Version/s:   (was: 2.2.0-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review, Todd. I've committed this and merged to branch-2. 

 NN should not RPC to self to find trash defaults (causes deadlock)
 --

 Key: HADOOP-8770
 URL: https://issues.apache.org/jira/browse/HADOOP-8770
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Eli Collins
Priority: Blocker
 Fix For: 2.2.0-alpha

 Attachments: hdfs-3876.txt, hdfs-3876.txt, hdfs-3876.txt, 
 hdfs-3876.txt


 When transitioning a SBN to active, I ran into the following situation:
 - the TrashPolicy first gets loaded by an IPC Server Handler thread. The 
 {{initialize}} function then tries to make an RPC to the same node to find 
 out the defaults.
 - This is happening inside the NN write lock (since it's part of the active 
 initialization). Hence, all of the other handler threads are already blocked 
 waiting to get the NN lock.
 - Since no handler threads are free, the RPC blocks forever and the NN never 
 enters active state.
 We need to have a general policy that the NN should never make RPCs to itself 
 for any reason, due to the potential for deadlocks like this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8771) Correct CHANGES.txt file

2012-09-05 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8771:
---

 Summary: Correct CHANGES.txt file
 Key: HADOOP-8771
 URL: https://issues.apache.org/jira/browse/HADOOP-8771
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Blocker


The trunk CHANGES.txt files have a bunch of stuff listed under trunk that's 
actually already in branch-2; let's move that down to the branch-2 section. 
Also, the branch-2 files need to be updated (they currently say Release 
2.0.1-alpha - UNRELEASED even though 2.0.1 has already been released).


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8770) NN should not RPC to self to find trash defaults (causes deadlock)

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449064#comment-13449064
 ] 

Hudson commented on HADOOP-8770:


Integrated in Hadoop-Common-trunk-Commit #2687 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2687/])
HADOOP-8770. NN should not RPC to self to find trash defaults. Contributed 
by Eli Collins (Revision 1381319)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381319
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSTrash.java


 NN should not RPC to self to find trash defaults (causes deadlock)
 --

 Key: HADOOP-8770
 URL: https://issues.apache.org/jira/browse/HADOOP-8770
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Eli Collins
Priority: Blocker
 Fix For: 2.2.0-alpha

 Attachments: hdfs-3876.txt, hdfs-3876.txt, hdfs-3876.txt, 
 hdfs-3876.txt


 When transitioning a SBN to active, I ran into the following situation:
 - the TrashPolicy first gets loaded by an IPC Server Handler thread. The 
 {{initialize}} function then tries to make an RPC to the same node to find 
 out the defaults.
 - This is happening inside the NN write lock (since it's part of the active 
 initialization). Hence, all of the other handler threads are already blocked 
 waiting to get the NN lock.
 - Since no handler threads are free, the RPC blocks forever and the NN never 
 enters active state.
 We need to have a general policy that the NN should never make RPCs to itself 
 for any reason, due to the potential for deadlocks like this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8770) NN should not RPC to self to find trash defaults (causes deadlock)

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449066#comment-13449066
 ] 

Hudson commented on HADOOP-8770:


Integrated in Hadoop-Hdfs-trunk-Commit #2750 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2750/])
HADOOP-8770. NN should not RPC to self to find trash defaults. Contributed 
by Eli Collins (Revision 1381319)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381319
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSTrash.java


 NN should not RPC to self to find trash defaults (causes deadlock)
 --

 Key: HADOOP-8770
 URL: https://issues.apache.org/jira/browse/HADOOP-8770
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Eli Collins
Priority: Blocker
 Fix For: 2.2.0-alpha

 Attachments: hdfs-3876.txt, hdfs-3876.txt, hdfs-3876.txt, 
 hdfs-3876.txt


 When transitioning an SBN to active, I ran into the following situation:
 - the TrashPolicy first gets loaded by an IPC Server Handler thread. The 
 {{initialize}} function then tries to make an RPC to the same node to find 
 out the defaults.
 - This is happening inside the NN write lock (since it's part of the active 
 initialization). Hence, all of the other handler threads are already blocked 
 waiting to get the NN lock.
 - Since no handler threads are free, the RPC blocks forever and the NN never 
 enters active state.
 We need to have a general policy that the NN should never make RPCs to itself 
 for any reason, due to the potential for deadlocks like this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8624) ProtobufRpcEngine should log all RPCs if TRACE logging is enabled

2012-09-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8624:


   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Yep, I think this was committed during one of the JIRA outages last month. It 
was committed in branch-2@1366128 and trunk@1366127.

 ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
 -

 Key: HADOOP-8624
 URL: https://issues.apache.org/jira/browse/HADOOP-8624
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: hadoop-8624.txt


 Since all RPC requests/responses are now ProtoBufs, it's easy to add a TRACE 
 level logging output for ProtobufRpcEngine that actually shows the full 
 content of all calls. This is very handy especially when writing/debugging 
 unit tests, but might also be useful to enable at runtime for short periods 
 of time to debug certain production issues.
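
For reference, a hedged sketch of how one might switch that logger to TRACE at 
runtime with the log4j 1.2 API Hadoop used at the time; it assumes the engine 
logs under its fully qualified class name, the usual Hadoop convention.

{code}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class EnableRpcTrace {
  public static void main(String[] args) {
    // Turn on full request/response logging for the protobuf RPC engine.
    Logger.getLogger("org.apache.hadoop.ipc.ProtobufRpcEngine")
          .setLevel(Level.TRACE);
  }
}
{code}

The equivalent log4j.properties line would be 
log4j.logger.org.apache.hadoop.ipc.ProtobufRpcEngine=TRACE.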

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8766:
-

Attachment: HADOOP-8766.001.patch

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie
 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8766:
-

Status: Patch Available  (was: Open)

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie
 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reassigned HADOOP-8766:


Assignee: Colin Patrick McCabe

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449109#comment-13449109
 ] 

Colin Patrick McCabe commented on HADOOP-8766:
--

It's easier just to use {{System.getProperty("test.build.data")}}.  (That's the 
approach I took here.)
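
A minimal sketch of that pattern, assuming the conventional Hadoop test 
property name and a java.io.tmpdir fallback that the real code may not use:

{code}
import java.io.File;

class TestRootDir {
  // Resolve the local test root from test.build.data so concurrent builds
  // do not collide on a shared /tmp/test; fall back to java.io.tmpdir.
  static File getTestRootDir() {
    String base = System.getProperty("test.build.data",
        System.getProperty("java.io.tmpdir"));
    return new File(base, "test");
  }
}
{code}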

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449108#comment-13449108
 ] 

Eli Collins commented on HADOOP-8648:
-

The new native_tests execution should be in the native profile, not the 
ApacheDS KDC server profile. Otherwise it looks great.

Excellent work on the fix, and thanks, Andy, for the detailed explanation.

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.
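
For context, the chunk size here is the io.bytes.per.checksum setting; a 
minimal sketch of the configuration that exercises the degenerate case (test 
wiring omitted):

{code}
import org.apache.hadoop.conf.Configuration;

class Crc32EdgeCase {
  static Configuration oneByteChunkConf() {
    Configuration conf = new Configuration();
    // One checksum chunk per data byte: a 4-byte CRC is stored per byte,
    // the boundary case that crashed the native pipelined_crc32c path.
    conf.setInt("io.bytes.per.checksum", 1);
    return conf;
  }
}
{code}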

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8772) RawLocalFileStatus shells out for permission info

2012-09-05 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8772:
---

 Summary: RawLocalFileStatus shells out for permission info
 Key: HADOOP-8772
 URL: https://issues.apache.org/jira/browse/HADOOP-8772
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.3, 3.0.0, 2.2.0-alpha
Reporter: Daryn Sharp
Priority: Critical


{{RawLocalFileStatus}} shells out to run ls to get permissions info.  This is 
very inefficient.

More importantly, mixing multithreading and forking is risky.  Some versions of 
glibc in RHEL will deadlock in functions such as __GI__IO_list_lock and 
malloc_atfork.  All this unnecessary shelling out to access the local 
filesystem greatly increases the risk of deadlock.

Notably, the NM's user localizer is seen to jam more frequently than the TT & NM.
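
One fork-free direction (a sketch only, not necessarily the fix this JIRA will 
take, and it assumes a Java 7+ JVM with POSIX attribute support) is to read the 
metadata in-process via java.nio.file:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

class PermissionProbe {
  // Reads owner and permission bits without forking `ls`, avoiding the
  // glibc fork/malloc deadlock risk described above.
  static void describe(String file) throws IOException {
    Path p = Paths.get(file);
    String owner = Files.getOwner(p).getName();
    Set<PosixFilePermission> perms = Files.getPosixFilePermissions(p);
    System.out.println(owner + " " + perms);
  }
}
{code}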

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8648:
-

Attachment: HADOOP-8648.005.patch

* add test_bulk_crc32 execution to the native profile, not the kerberos 
profile.  (Good catch, Eli.)

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8770) NN should not RPC to self to find trash defaults (causes deadlock)

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449151#comment-13449151
 ] 

Hudson commented on HADOOP-8770:


Integrated in Hadoop-Mapreduce-trunk-Commit #2711 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2711/])
HADOOP-8770. NN should not RPC to self to find trash defaults. Contributed 
by Eli Collins (Revision 1381319)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381319
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Trash.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSTrash.java


 NN should not RPC to self to find trash defaults (causes deadlock)
 --

 Key: HADOOP-8770
 URL: https://issues.apache.org/jira/browse/HADOOP-8770
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Eli Collins
Priority: Blocker
 Fix For: 2.2.0-alpha

 Attachments: hdfs-3876.txt, hdfs-3876.txt, hdfs-3876.txt, 
 hdfs-3876.txt


 When transitioning an SBN to active, I ran into the following situation:
 - the TrashPolicy first gets loaded by an IPC Server Handler thread. The 
 {{initialize}} function then tries to make an RPC to the same node to find 
 out the defaults.
 - This is happening inside the NN write lock (since it's part of the active 
 initialization). Hence, all of the other handler threads are already blocked 
 waiting to get the NN lock.
 - Since no handler threads are free, the RPC blocks forever and the NN never 
 enters active state.
 We need to have a general policy that the NN should never make RPCs to itself 
 for any reason, due to the potential for deadlocks like this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449153#comment-13449153
 ] 

Hadoop QA commented on HADOOP-8648:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543909/HADOOP-8648.005.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1408//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1408//console

This message is automatically generated.

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449160#comment-13449160
 ] 

Hadoop QA commented on HADOOP-8766:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12543905/HADOOP-8766.001.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1407//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1407//console

This message is automatically generated.

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449192#comment-13449192
 ] 

Eli Collins commented on HADOOP-8648:
-

+1 lgtm

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449207#comment-13449207
 ] 

Hudson commented on HADOOP-8648:


Integrated in Hadoop-Common-trunk-Commit #2688 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2688/])
HADOOP-8648. libhadoop: native CRC32 validation crashes when 
io.bytes.per.checksum=1. Contributed by Colin Patrick McCabe (Revision 1381419)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381419
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c


 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449212#comment-13449212
 ] 

Hudson commented on HADOOP-8648:


Integrated in Hadoop-Hdfs-trunk-Commit #2751 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2751/])
HADOOP-8648. libhadoop: native CRC32 validation crashes when 
io.bytes.per.checksum=1. Contributed by Colin Patrick McCabe (Revision 1381419)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381419
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c


 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8648:


  Resolution: Fixed
   Fix Version/s: 2.2.0-alpha
Target Version/s:   (was: 2.2.0-alpha)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Colin!

 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8664) hadoop streaming job need the full path to commands even when they are in the path

2012-09-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449237#comment-13449237
 ] 

Suresh Srinivas commented on HADOOP-8664:
-

+1 for the patch. I will commit this to branch-1-win.

 hadoop streaming job need the full path to commands even when they are in the 
 path
 --

 Key: HADOOP-8664
 URL: https://issues.apache.org/jira/browse/HADOOP-8664
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8664.branch-1-win.1.patch


 Running a hadoop streaming job as
 bin/hadoop jar path_to_streaming_jar -input path_on_hdfs -mapper cat -output 
 path_on_hdfs -reducer cat
 fails, saying the program cat was not found, even though cat is in the path 
 and works from the cmd prompt.
 If I give the full path to cmd.exe, the exception is not seen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8664) hadoop streaming job need the full path to commands even when they are in the path

2012-09-05 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8664:


   Resolution: Fixed
Fix Version/s: 1-win
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch. Thank you Bikas.

 hadoop streaming job need the full path to commands even when they are in the 
 path
 --

 Key: HADOOP-8664
 URL: https://issues.apache.org/jira/browse/HADOOP-8664
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 1-win

 Attachments: HADOOP-8664.branch-1-win.1.patch


 Running a hadoop streaming job as
 bin/hadoop jar path_to_streaming_jar -input path_on_hdfs -mapper cat -output 
 path_on_hdfs -reducer cat
 fails, saying the program cat was not found, even though cat is in the path 
 and works from the cmd prompt.
 If I give the full path to cmd.exe, the exception is not seen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449244#comment-13449244
 ] 

Hudson commented on HADOOP-8648:


Integrated in Hadoop-Mapreduce-trunk-Commit #2712 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2712/])
HADOOP-8648. libhadoop: native CRC32 validation crashes when 
io.bytes.per.checksum=1. Contributed by Colin Patrick McCabe (Revision 1381419)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381419
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/test/org/apache/hadoop/util/test_bulk_crc32.c


 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449247#comment-13449247
 ] 

Aaron T. Myers commented on HADOOP-8766:


+1, the patch looks good to me. I'm going to commit this momentarily.

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8766:
---

   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Colin.

 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449254#comment-13449254
 ] 

Hudson commented on HADOOP-8766:


Integrated in Hadoop-Hdfs-trunk-Commit #2752 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2752/])
HADOOP-8766. FileContextMainOperationsBaseTest should randomize the root 
dir. Contributed by Colin Patrick McCabe. (Revision 1381437)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381437
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java


 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449258#comment-13449258
 ] 

Hudson commented on HADOOP-8766:


Integrated in Hadoop-Common-trunk-Commit #2689 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2689/])
HADOOP-8766. FileContextMainOperationsBaseTest should randomize the root 
dir. Contributed by Colin Patrick McCabe. (Revision 1381437)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381437
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java


 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. E.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-09-05 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449268#comment-13449268
 ] 

Ivan Mitic commented on HADOOP-8457:


Thanks for commenting, Sanjay.

bq. I don't like UGI being exposed further. Can we change the api to take user 
or usergroup?

Passing along UGI seemed reasonable, as we need information about both the user 
name and the user groups. We can change the API to accept a username and a list 
of user groups; however, I personally like the current approach better, as it 
encapsulates the user info. This can become useful at some later time. Do let 
me know if you think differently and I'll change the API to accept usergroups.

 Address file ownership issue for users in Administrators group on Windows.
 --

 Key: HADOOP-8457
 URL: https://issues.apache.org/jira/browse/HADOOP-8457
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.0, 0.24.0
Reporter: Chuan Liu
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
 HADOOP-8457-branch-1-win_Admins.patch


 On Linux, the initial file owners are the creators. (I think this is true in 
 general. If there are exceptions, please let me know.) On Windows, a file 
 created by a user in the Administrators group has the initial owner 
 ‘Administrators’, i.e. the Administrators group is the initial owner of the 
 file. This leads to an exception when we check file ownership in the 
 SecureIOUtils.checkStat() method, and as a result that method is disabled 
 right now. We need to address this problem and enable the method on Windows.
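
A sketch of the kind of relaxation under discussion (names and structure are 
illustrative only, not the attached patch): treat a file owned by the 
Administrators group as owned by the expected user when that user is an 
administrator.

{code}
import java.util.List;

class OwnershipCheckSketch {
  // Illustrative only: on Windows, files created by administrators are
  // owned by the Administrators group rather than the individual account,
  // so accept that group as a match for an expected admin user.
  static boolean ownerMatches(String actualOwner, String expectedOwner,
      List<String> expectedOwnerGroups, boolean onWindows) {
    if (actualOwner.equals(expectedOwner)) {
      return true;
    }
    return onWindows
        && actualOwner.equals("Administrators")
        && expectedOwnerGroups.contains("Administrators");
  }
}
{code}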

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-05 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449284#comment-13449284
 ] 

Ivan Mitic commented on HADOOP-8733:


Thanks for reviewing Bikas.

bq. Makes sense to run LTC test when Shell.LINUX instead of when 
!Shell.WINDOWS? I think it reads better
Sure. Just to confirm, this would prevent running the test on all platforms 
that are not Linux (Mac, for example). Is that something we want?
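
A minimal sketch of the guard under discussion, assuming JUnit 4's Assume and 
the Shell.LINUX constant mentioned above; the test class and method names are 
placeholders:

{noformat}
import org.apache.hadoop.util.Shell;
import org.junit.Assume;
import org.junit.Test;

public class TestLinuxTaskControllerGuard {
  @Test
  public void testLaunchArgs() throws Exception {
    // Skip (rather than fail) on every platform that is not Linux,
    // including Mac and Windows.
    Assume.assumeTrue(Shell.LINUX);
    // ... LinuxTaskController-specific checks would go here ...
  }
}
{noformat}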

 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449288#comment-13449288
 ] 

Bikas Saha commented on HADOOP-8733:


LTC does not run on anything other than Linux.

 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-09-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449294#comment-13449294
 ] 

Hudson commented on HADOOP-8766:


Integrated in Hadoop-Mapreduce-trunk-Commit #2713 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2713/])
HADOOP-8766. FileContextMainOperationsBaseTest should randomize the root 
dir. Contributed by Colin Patrick McCabe. (Revision 1381437)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381437
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java


 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path. Eg (a hedged sketch of one possible randomization follows the trace):
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}
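
For illustration, one way the root could be randomized; the use of UUID below 
is an assumption, not necessarily what the committed patch does:

{noformat}
import java.util.UUID;

public class RandomTestRoot {
  // Derive a unique per-run root instead of hardcoding file:///tmp/test,
  // so concurrent jobs on the same machine cannot clash on the path.
  static final String LOCAL_FS_ROOT_URI =
      "file:///tmp/test-" + UUID.randomUUID();

  public static void main(String[] args) {
    System.out.println("Using test root: " + LOCAL_FS_ROOT_URI);
  }
}
{noformat}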

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8431) Running distcp wo args throws IllegalArgumentException

2012-09-05 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449320#comment-13449320
 ] 

Sandy Ryza commented on HADOOP-8431:


I reproduced it as well. A usage() statement is printed in addition to the 
error - is the change to be made just removing the "Invalid arguments: Target 
path not specified" printed to System.err? Or the ERROR log statement as well?

 Running distcp wo args throws IllegalArgumentException
 --

 Key: HADOOP-8431
 URL: https://issues.apache.org/jira/browse/HADOOP-8431
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie

 Running distcp w/o args results in the following:
 {noformat}
 hadoop-3.0.0-SNAPSHOT $ ./bin/hadoop distcp
 12/05/23 18:49:04 ERROR tools.DistCp: Invalid arguments: 
 java.lang.IllegalArgumentException: Target path not specified
   at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
   at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)
 Invalid arguments: Target path not specified
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-05 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8733:
---

Attachment: HADOOP-8733-scripts.2.patch

Thanks for clarifying! Attaching updated patch.

 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.2.patch, HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8431) Running distcp wo args throws IllegalArgumentException

2012-09-05 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449384#comment-13449384
 ] 

Eli Collins commented on HADOOP-8431:
-

It should just print the usage. Eg in the following I'd remove the ERROR log, 
the IllegalArgumentException backtrace, and the "Invalid arguments" log; a 
sketch of that behavior follows the output below.

{noformat}
$ ./bin/hadoop distcp
12/09/05 20:38:26 ERROR tools.DistCp: Invalid arguments: 
java.lang.IllegalArgumentException: Target path not specified
at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)
Invalid arguments: Target path not specified
usage: distcp OPTIONS [source_path...] <target_path>
              OPTIONS
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -f <arg>               List of files that need to be copied
 -filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
 -i                     Ignore failures during copy
 -log <arg>             Folder on DFS where distcp execution logs are
                        saved
 -m <arg>               Max number of concurrent maps to use for copy
 -mapredSslConf <arg>   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist.
 -p <arg>               preserve status (rbugp)(replication, block-size,
                        user, group, permission)
 -sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths.
 -strategy <arg>        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp <arg>             Intermediate work path to be used for atomic
                        commit
 -update                Update target, copying only missingfiles or
                        directories
{noformat}
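
A hedged sketch of that behavior; the class name and error handling below are 
illustrative, not the committed fix:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.tools.OptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Illustrative only: catch the parse failure and print usage, nothing else.
public class QuietDistCp extends Configured implements Tool {
  @Override
  public int run(String[] argv) {
    try {
      // Throws IllegalArgumentException when the target path is missing,
      // per the backtrace above.
      OptionsParser.parse(argv);
    } catch (IllegalArgumentException e) {
      // No ERROR log and no backtrace, just the usage line.
      System.err.println(
          "usage: distcp OPTIONS [source_path...] <target_path>");
      return -1;
    }
    return 0; // actual copy logic elided in this sketch
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new QuietDistCp(), args));
  }
}
{noformat}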


 Running distcp wo args throws IllegalArgumentException
 --

 Key: HADOOP-8431
 URL: https://issues.apache.org/jira/browse/HADOOP-8431
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
  Labels: newbie

 Running distcp w/o args results in the following:
 {noformat}
 hadoop-3.0.0-SNAPSHOT $ ./bin/hadoop distcp
 12/05/23 18:49:04 ERROR tools.DistCp: Invalid arguments: 
 java.lang.IllegalArgumentException: Target path not specified
   at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
   at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)
 Invalid arguments: Target path not specified
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-05 Thread giovanni delussu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13449436#comment-13449436
 ] 

giovanni delussu commented on HADOOP-8767:
--

About the format: I cloned the trunk repository with git, but the two files I'm 
proposing to change are different there, so this is not a patch for trunk; it 
is a patch for the hadoop-1.0.3.tar.gz version only. What should I do? Create a 
git repository from hadoop-1.0.3.tar.gz and then, after the change, publish the 
git diff?

About the comments: (1) is already the way I modified hadoop-config.sh. 
(2) I thought the sourcing of hadoop-env.sh in slaves.sh might serve some other 
purpose, but if that is not the case I agree that we should remove it.

 secondary namenode on slave machines
 

 Key: HADOOP-8767
 URL: https://issues.apache.org/jira/browse/HADOOP-8767
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
Reporter: giovanni delussu
Priority: Minor
 Fix For: 1.0.3

 Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
 patch_slaves.sh_hadoop-1.0.3_fromtar.patch


 when the default value for HADOOP_SLAVES is changed in hadoop-env.sh, starting 
 HDFS (with start-dfs.sh) creates secondary namenodes on all the machines 
 listed in conf/slaves instead of those in conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira