[jira] [Commented] (HDFS-4291) edit log unit tests leave stray test_edit_log_file around

2012-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527020#comment-13527020
 ] 

Hadoop QA commented on HDFS-4291:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12559996/HDFS-4291.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3624//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3624//console

This message is automatically generated.

> edit log unit tests leave stray test_edit_log_file around
> -
>
> Key: HDFS-4291
> URL: https://issues.apache.org/jira/browse/HDFS-4291
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4291.001.patch
>
>
> Some of the edit log tests leave a stray {{test_edit_log_file}} around.  
> These should be put in the test directory, and cleaned up after the tests 
> succeed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4291) edit log unit tests leave stray test_edit_log_file around

2012-12-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526986#comment-13526986
 ] 

Todd Lipcon commented on HDFS-4291:
---

+1 pending Jenkins

> edit log unit tests leave stray test_edit_log_file around
> -
>
> Key: HDFS-4291
> URL: https://issues.apache.org/jira/browse/HDFS-4291
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4291.001.patch
>
>
> Some of the edit log tests leave a stray {{test_edit_log_file}} around.  
> These should be put in the test directory, and cleaned up after the tests 
> succeed.



[jira] [Commented] (HDFS-4274) BlockPoolSliceScanner does not close verification log during shutdown

2012-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526959#comment-13526959
 ] 

Hadoop QA commented on HDFS-4274:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12559982/HDFS-4274.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3623//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3623//console

This message is automatically generated.

> BlockPoolSliceScanner does not close verification log during shutdown
> -
>
> Key: HDFS-4274
> URL: https://issues.apache.org/jira/browse/HDFS-4274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4274.1.patch
>
>
> {{BlockPoolSliceScanner}} holds open a handle to a verification log.  This 
> file is not getting closed during process shutdown.



[jira] [Updated] (HDFS-4291) edit log unit tests leave stray test_edit_log_file around

2012-12-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4291:
---

Status: Patch Available  (was: Open)

> edit log unit tests leave stray test_edit_log_file around
> -
>
> Key: HDFS-4291
> URL: https://issues.apache.org/jira/browse/HDFS-4291
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4291.001.patch
>
>
> Some of the edit log tests leave a stray {{test_edit_log_file}} around.  
> These should be put in the test directory, and cleaned up after the tests 
> succeed.



[jira] [Updated] (HDFS-4291) edit log unit tests leave stray test_edit_log_file around

2012-12-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4291:
---

Attachment: HDFS-4291.001.patch

> edit log unit tests leave stray test_edit_log_file around
> -
>
> Key: HDFS-4291
> URL: https://issues.apache.org/jira/browse/HDFS-4291
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-4291.001.patch
>
>
> Some of the edit log tests leave a stray {{test_edit_log_file}} around.  
> These should be put in the test directory, and cleaned up after the tests 
> succeed.



[jira] [Commented] (HDFS-4279) NameNode does not initialize generic conf keys when started with -recover

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526912#comment-13526912
 ] 

Hudson commented on HDFS-4279:
--

Integrated in Hadoop-trunk-Commit #3099 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3099/])
HDFS-4279. NameNode does not initialize generic conf keys when started with 
-recover. Contributed by Colin Patrick McCabe. (Revision 1418559)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418559
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRecovery.java


> NameNode does not initialize generic conf keys when started with -recover
> -
>
> Key: HDFS-4279
> URL: https://issues.apache.org/jira/browse/HDFS-4279
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-3236.002.patch
>
>
> This means that configurations that scope the location of the 
> name/edits/shared edits dirs by nameservice or namenode won't work with `hdfs 
> namenode -recover`.



[jira] [Created] (HDFS-4291) edit log unit tests leave stray test_edit_log_file around

2012-12-07 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4291:
--

 Summary: edit log unit tests leave stray test_edit_log_file around
 Key: HDFS-4291
 URL: https://issues.apache.org/jira/browse/HDFS-4291
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Some of the edit log tests leave a stray {{test_edit_log_file}} around.  These 
should be put in the test directory, and cleaned up after the tests succeed.
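The fix the description calls for — create the scratch file under the test data directory and remove it when the test finishes — can be sketched as follows. This is a minimal, hypothetical stand-in for the real JUnit tests, not the actual patch; `test.build.data` is the system property Hadoop tests conventionally use for their scratch directory.

```java
import java.io.File;
import java.io.IOException;

// Minimal sketch: create the edit-log scratch file inside the test
// directory instead of the working directory, and delete it afterwards
// so no stray test_edit_log_file is left behind.
public class EditLogFileCleanup {
    // Resolve the test directory the way Hadoop tests conventionally do,
    // falling back to the JVM temp dir when the property is unset.
    static File testDir() {
        return new File(System.getProperty("test.build.data",
                System.getProperty("java.io.tmpdir")));
    }

    public static boolean runScenario() throws IOException {
        File f = new File(testDir(), "test_edit_log_file");
        try {
            if (!f.createNewFile() && !f.exists()) {
                throw new IOException("could not create " + f);
            }
            // ... the actual test body would write and replay edits here ...
            return f.exists();
        } finally {
            f.delete();   // cleanup runs even if the test body throws
        }
    }

    public static void main(String[] args) throws IOException {
        boolean existedDuringTest = runScenario();
        boolean strayLeft = new File(testDir(), "test_edit_log_file").exists();
        System.out.println(existedDuringTest && !strayLeft ? "clean" : "stray");
    }
}
```

Putting the delete in a `finally` block (or a JUnit `@After` method) is what guarantees cleanup even when the test fails partway through.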



[jira] [Updated] (HDFS-4279) NameNode does not initialize generic conf keys when started with -recover

2012-12-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4279:
-

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks a lot for the patch and the testing, Colin. I've just committed this to 
trunk and branch-2.

> NameNode does not initialize generic conf keys when started with -recover
> -
>
> Key: HDFS-4279
> URL: https://issues.apache.org/jira/browse/HDFS-4279
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-3236.002.patch
>
>
> This means that configurations that scope the location of the 
> name/edits/shared edits dirs by nameservice or namenode won't work with `hdfs 
> namenode -recover`.



[jira] [Created] (HDFS-4290) Expose an event listener interface in DFSOutputStreams for block write pipeline status changes

2012-12-07 Thread Harsh J (JIRA)
Harsh J created HDFS-4290:
-

 Summary: Expose an event listener interface in DFSOutputStreams 
for block write pipeline status changes
 Key: HDFS-4290
 URL: https://issues.apache.org/jira/browse/HDFS-4290
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Priority: Minor


I've noticed HBase periodically polls the current status of block replicas for 
its HLog files via the API presented by HDFS-826.

It would perhaps be better for such clients if they could register a listener 
instead. The listener(s) could be sent an event when the state of the last open 
block changes (e.g., a DataNode dropping out of the pipeline with no replacement 
found). This would avoid a periodic, parallel polling loop in such clients and 
be more efficient.

Just a thought :)
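The shape of such a listener could look roughly like this. All names here are hypothetical — nothing like this exists in DFSOutputStream today; the sketch only illustrates replacing a polling loop with callbacks.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of a pipeline-status listener: clients register once and are
// called back on changes, instead of polling the replica count in a loop.
public class PipelineListenerSketch {
    // Hypothetical callback: receives the new number of live replicas.
    interface PipelineStatusListener {
        void onPipelineChange(int liveReplicas);
    }

    // Stand-in for the writer-side state a DFSOutputStream would own.
    static class Writer {
        private final List<PipelineStatusListener> listeners =
                new CopyOnWriteArrayList<>();
        private int liveReplicas = 3;

        void addListener(PipelineStatusListener l) { listeners.add(l); }

        // Invoked when a DataNode drops out of the write pipeline.
        void datanodeLost() {
            liveReplicas--;
            for (PipelineStatusListener l : listeners) {
                l.onPipelineChange(liveReplicas);
            }
        }
    }

    public static void main(String[] args) {
        Writer w = new Writer();
        int[] seen = {0};
        w.addListener(replicas -> seen[0] = replicas);
        w.datanodeLost();   // pipeline shrinks from 3 to 2
        System.out.println("listener saw " + seen[0] + " live replicas");
    }
}
```

A client like HBase would register once per open HLog file and react only when notified, rather than re-querying replica status on a timer.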



[jira] [Commented] (HDFS-4279) NameNode does not initialize generic conf keys when started with -recover

2012-12-07 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526885#comment-13526885
 ] 

Colin Patrick McCabe commented on HDFS-4279:


To test this manually, I set up a cluster with NFS-based HA, corrupted the edit 
log, and verified that I could run {{\-recover}} as expected.
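The kind of scoped configuration the bug affects looks like the following hypothetical hdfs-site.xml fragment; `ns1`/`nn1` are placeholder nameservice and namenode ids, following the NameNode's generic-key suffixing convention.

```xml
<configuration>
  <!-- Dirs scoped to nameservice ns1 / namenode nn1: without initializing
       generic conf keys, `hdfs namenode -recover` never resolves the
       suffixed keys back to their base names. -->
  <property>
    <name>dfs.namenode.name.dir.ns1.nn1</name>
    <value>/data/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir.ns1</name>
    <value>file:///mnt/nfs/shared-edits</value>
  </property>
</configuration>
```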

> NameNode does not initialize generic conf keys when started with -recover
> -
>
> Key: HDFS-4279
> URL: https://issues.apache.org/jira/browse/HDFS-4279
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3236.002.patch
>
>
> This means that configurations that scope the location of the 
> name/edits/shared edits dirs by nameservice or namenode won't work with `hdfs 
> namenode -recover`.



[jira] [Updated] (HDFS-4274) BlockPoolSliceScanner does not close verification log during shutdown

2012-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4274:


Status: Patch Available  (was: Open)

> BlockPoolSliceScanner does not close verification log during shutdown
> -
>
> Key: HDFS-4274
> URL: https://issues.apache.org/jira/browse/HDFS-4274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4274.1.patch
>
>
> {{BlockPoolSliceScanner}} holds open a handle to a verification log.  This 
> file is not getting closed during process shutdown.



[jira] [Updated] (HDFS-4274) BlockPoolSliceScanner does not close verification log during shutdown

2012-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4274:


 Target Version/s: 3.0.0, trunk-win  (was: trunk-win)
Affects Version/s: 3.0.0

Adding 3.0.0 to Affects Version/s and Target Version/s.  This patch can commit 
to trunk and then merge to branch-trunk-win.

> BlockPoolSliceScanner does not close verification log during shutdown
> -
>
> Key: HDFS-4274
> URL: https://issues.apache.org/jira/browse/HDFS-4274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4274.1.patch
>
>
> {{BlockPoolSliceScanner}} holds open a handle to a verification log.  This 
> file is not getting closed during process shutdown.



[jira] [Updated] (HDFS-4274) BlockPoolSliceScanner does not close verification log during shutdown

2012-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4274:


Attachment: HDFS-4274.1.patch

The attached patch changes {{DataBlockScanner#run}} so that during shutdown, it 
iterates through every {{BlockPoolSliceScanner}} and calls a new 
{{BlockPoolSliceScanner#shutdown}} method, which closes the verification log.

Jenkins will give this patch -1 for no new tests, but this patch is going to 
fix more than 100 test suites when running on Windows.
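The shutdown sequence described above can be sketched in plain Java. The class and field names below are stand-ins, not the actual patch — the real change touches {{DataBlockScanner}} and {{BlockPoolSliceScanner}} — but the pattern is the same: the parent scanner walks its per-block-pool children and closes the log each one holds open.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.List;

// Sketch: a parent scanner that, on shutdown, iterates its per-block-pool
// slice scanners and closes the verification log each child holds open.
public class ScannerShutdownSketch {
    // Stand-in for BlockPoolSliceScanner: owns an open verification log.
    static class SliceScanner {
        final PrintWriter verificationLog;
        boolean closed = false;
        SliceScanner(PrintWriter log) { this.verificationLog = log; }
        // The new shutdown() method closes the log handle.
        void shutdown() { verificationLog.close(); closed = true; }
    }

    // Stand-in for DataBlockScanner#run's shutdown path.
    static void shutdownAll(List<SliceScanner> scanners) {
        for (SliceScanner s : scanners) {
            s.shutdown();
        }
    }

    public static void main(String[] args) {
        List<SliceScanner> scanners = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            scanners.add(new SliceScanner(new PrintWriter(new StringWriter())));
        }
        shutdownAll(scanners);
        boolean allClosed = scanners.stream().allMatch(s -> s.closed);
        System.out.println(allClosed ? "all logs closed" : "leak");
    }
}
```

Closing the handle matters especially on Windows, where an open file handle blocks deletion of the test data directory.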

> BlockPoolSliceScanner does not close verification log during shutdown
> -
>
> Key: HDFS-4274
> URL: https://issues.apache.org/jira/browse/HDFS-4274
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4274.1.patch
>
>
> {{BlockPoolSliceScanner}} holds open a handle to a verification log.  This 
> file is not getting closed during process shutdown.



[jira] [Commented] (HDFS-4275) MiniDFSCluster-based tests fail on Windows due to failure to delete test name node directory

2012-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526844#comment-13526844
 ] 

Hadoop QA commented on HDFS-4275:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12559950/HDFS-4275.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3622//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3622//console

This message is automatically generated.

> MiniDFSCluster-based tests fail on Windows due to failure to delete test name 
> node directory
> 
>
> Key: HDFS-4275
> URL: https://issues.apache.org/jira/browse/HDFS-4275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4275.1.patch
>
>
> Multiple HDFS test suites fail on Windows during initialization of 
> {{MiniDFSCluster}} due to "Could not fully delete" the name testing data 
> directory.



[jira] [Commented] (HDFS-4279) NameNode does not initialize generic conf keys when started with -recover

2012-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526825#comment-13526825
 ] 

Hadoop QA commented on HDFS-4279:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12556220/HDFS-3236.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3621//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3621//console

This message is automatically generated.

> NameNode does not initialize generic conf keys when started with -recover
> -
>
> Key: HDFS-4279
> URL: https://issues.apache.org/jira/browse/HDFS-4279
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3236.002.patch
>
>
> This means that configurations that scope the location of the 
> name/edits/shared edits dirs by nameserice or namenode won't work with `hdfs 
> namenode -recover`



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2012-12-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526824#comment-13526824
 ] 

Aaron T. Myers commented on HDFS-4261:
--

Thanks a lot for the investigation, Chris. I agree that tracking this specific 
Windows issue warrants another JIRA.

Nicholas: does Junping's latest patch look OK to you? If so, I'll go ahead and 
commit it.

> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out in my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Commented] (HDFS-4261) TestBalancerWithNodeGroup times out

2012-12-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526806#comment-13526806
 ] 

Chris Nauroth commented on HDFS-4261:
-

I reviewed the Windows failure more closely and found this:

{code}
java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: replica.getBytesOnDisk() !=
 block.getNumBytes(), block=BP-TEST:blk_1000_2000, replica=ReplicaUnderRecovery,
 blk_1000_2000, RUR
{code}

That came from this check in {{FsDatasetImpl#updateReplicaUnderRecovery}}:

{code}
//check replica's byte on disk
if (replica.getBytesOnDisk() != oldBlock.getNumBytes()) {
  throw new IOException("THIS IS NOT SUPPOSED TO HAPPEN:"
  + " replica.getBytesOnDisk() != block.getNumBytes(), block="
  + oldBlock + ", replica=" + replica);
}
{code}

This is causing the current balancer iteration to move 0 bytes.  Then, the new 
logic returns {{NO_MOVE_PROGRESS}} after exceeding the maximum iterations.

This looks to be an unrelated Windows-specific issue, so I have filed a 
separate jira to track it: HDFS-4289.


> TestBalancerWithNodeGroup times out
> ---
>
> Key: HDFS-4261
> URL: https://issues.apache.org/jira/browse/HDFS-4261
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer
>Affects Versions: 1.0.4, 1.1.1, 2.0.2-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Junping Du
> Attachments: HDFS-4261.patch, HDFS-4261-v2.patch, HDFS-4261-v3.patch, 
> HDFS-4261-v4.patch
>
>
> When I manually ran TestBalancerWithNodeGroup, it always timed out in my 
> machine.  Looking at the Jenkins report [build 
> #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/],
>  TestBalancerWithNodeGroup somehow was skipped so that the problem was not 
> detected.



[jira] [Commented] (HDFS-4289) FsDatasetImpl#updateReplicaUnderRecovery throws errors validating replica byte count on Windows

2012-12-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526804#comment-13526804
 ] 

Chris Nauroth commented on HDFS-4289:
-

This was discovered while researching HDFS-4261.  All subsequent notes assume 
that HDFS-4261 has been fixed first.

Running {{TestBalancerWithNodeGroup#testBalancerWithRackLocality}} shows this 
in the output:

{code}
java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: replica.getBytesOnDisk() !=
 block.getNumBytes(), block=BP-TEST:blk_1000_2000, replica=ReplicaUnderRecovery,
 blk_1000_2000, RUR
{code}

That log message came from this check in 
{{FsDatasetImpl#updateReplicaUnderRecovery}}:

{code}
//check replica's byte on disk
if (replica.getBytesOnDisk() != oldBlock.getNumBytes()) {
  throw new IOException("THIS IS NOT SUPPOSED TO HAPPEN:"
  + " replica.getBytesOnDisk() != block.getNumBytes(), block="
  + oldBlock + ", replica=" + replica);
}
{code}

This is causing the current {{Balancer}} iteration to move 0 bytes.  Then, the 
{{Balancer}} logic returns {{NO_MOVE_PROGRESS}} after exceeding the maximum 
iterations.


> FsDatasetImpl#updateReplicaUnderRecovery throws errors validating replica 
> byte count on Windows
> ---
>
> Key: HDFS-4289
> URL: https://issues.apache.org/jira/browse/HDFS-4289
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> {{FsDatasetImpl#updateReplicaUnderRecovery}} throws errors validating replica 
> byte count on Windows.  This can be seen by running 
> {{TestBalancerWithNodeGroup#testBalancerWithRackLocality}}, which fails on 
> Windows.



[jira] [Created] (HDFS-4289) FsDatasetImpl#updateReplicaUnderRecovery throws errors validating replica byte count on Windows

2012-12-07 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4289:
---

 Summary: FsDatasetImpl#updateReplicaUnderRecovery throws errors 
validating replica byte count on Windows
 Key: HDFS-4289
 URL: https://issues.apache.org/jira/browse/HDFS-4289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


{{FsDatasetImpl#updateReplicaUnderRecovery}} throws errors validating replica 
byte count on Windows.  This can be seen by running 
{{TestBalancerWithNodeGroup#testBalancerWithRackLocality}}, which fails on 
Windows.



[jira] [Commented] (HDFS-4205) fsck fails with symlinks

2012-12-07 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526796#comment-13526796
 ] 

Eli Collins commented on HDFS-4205:
---

@Nikolai,

Like stat vs. lstat, FileContext#getFileLinkStatus returns the FileStatus of the 
link itself, while getFileStatus resolves all the symlinks in the path and 
returns the FileStatus of the resolved path.
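The stat/lstat distinction maps directly onto `java.nio.file`, which makes for a standalone illustration (local filesystem only, not HDFS code): reading attributes with NOFOLLOW_LINKS describes the link itself, like getFileLinkStatus, while the default follows the link to its target, like getFileStatus.

```java
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

// Create a file and a symlink to it, then read the link's attributes
// both without following (lstat-like) and with following (stat-like).
public class LinkStatusSketch {
    public static boolean[] probe() throws Exception {
        Path dir = Files.createTempDirectory("linkdemo");
        Path target = Files.createFile(dir.resolve("hello.txt"));
        Path link = Files.createSymbolicLink(dir.resolve("too.txt"), target);

        BasicFileAttributes ofLink = Files.readAttributes(
                link, BasicFileAttributes.class, LinkOption.NOFOLLOW_LINKS);
        BasicFileAttributes ofTarget = Files.readAttributes(
                link, BasicFileAttributes.class);   // follows the link

        return new boolean[] { ofLink.isSymbolicLink(),     // the link itself
                               ofTarget.isSymbolicLink() }; // the resolved file
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = probe();
        System.out.println("link view: " + r[0] + ", resolved view: " + r[1]);
    }
}
```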

> fsck fails with symlinks
> 
>
> Key: HDFS-4205
> URL: https://issues.apache.org/jira/browse/HDFS-4205
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>
> I created a symlink using
> {code}
> ...
> FileContext fc = FileContext.getFileContext(dst.fs.getUri());
> for (PathData src : srcs) {
>   fc.createSymlink(src.path, dst.path, false);
> }
> {code}
> After doing this to create a symlink {{/foo/too.txt -> /foo/hello.txt}}, I 
> tried to {{hdfs fsck}} and got the following:
> {code}
> [adi@host01 ~]$ hdfs fsck /
> Connecting to namenode via http://host01:21070
> FSCK started by adi (auth:SIMPLE) from /172.29.122.91 for path / at Fri Nov 
> 16 15:59:18 PST 2012
> FSCK ended at Fri Nov 16 15:59:18 PST 2012 in 3 milliseconds
> hdfs://host01:21020/foo/hello.txt
> Fsck on path '/' FAILED
> {code}
> It's very surprising that an unprivileged user can run code which so easily 
> causes a fundamental administration tool to fail.



[jira] [Commented] (HDFS-3598) WebHDFS: support file concat

2012-12-07 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526789#comment-13526789
 ] 

Eli Collins commented on HDFS-3598:
---

@Harsh, see the discussion on HDFS-950. I think Owen makes a good point with 
regard to not wanting to expose/encourage the API due to its limitations.

@Nicholas, what's the motivation for exposing it via WebHDFS, just API parity 
with DistributedFileSystem or something specific you're trying to accomplish?



> WebHDFS: support file concat
> 
>
> Key: HDFS-3598
> URL: https://issues.apache.org/jira/browse/HDFS-3598
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>
> In trunk and branch-2, DistributedFileSystem has a new concat(Path trg, Path 
> [] psrcs) method.  WebHDFS should support it.



[jira] [Updated] (HDFS-4275) MiniDFSCluster-based tests fail on Windows due to failure to delete test name node directory

2012-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4275:


 Target Version/s: 3.0.0, trunk-win  (was: trunk-win)
Affects Version/s: 3.0.0

Adding 3.0.0 to Affects Version/s and Target Version/s.  This patch can be 
committed to trunk and then merged to branch-trunk-win.

> MiniDFSCluster-based tests fail on Windows due to failure to delete test name 
> node directory
> 
>
> Key: HDFS-4275
> URL: https://issues.apache.org/jira/browse/HDFS-4275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4275.1.patch
>
>
> Multiple HDFS test suites fail on Windows during initialization of 
> {{MiniDFSCluster}} due to "Could not fully delete" errors on the name node 
> test data directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4275) MiniDFSCluster-based tests fail on Windows due to failure to delete test name node directory

2012-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4275:


Status: Patch Available  (was: Open)

> MiniDFSCluster-based tests fail on Windows due to failure to delete test name 
> node directory
> 
>
> Key: HDFS-4275
> URL: https://issues.apache.org/jira/browse/HDFS-4275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4275.1.patch
>
>
> Multiple HDFS test suites fail on Windows during initialization of 
> {{MiniDFSCluster}} due to "Could not fully delete" errors on the name node 
> test data directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4267) Remove pre-append related dead code in branch-1

2012-12-07 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526769#comment-13526769
 ] 

Sanjay Radia commented on HDFS-4267:


+1

> Remove pre-append related dead code in branch-1
> ---
>
> Key: HDFS-4267
> URL: https://issues.apache.org/jira/browse/HDFS-4267
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 1.0.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Minor
> Attachments: HDFS-4267.patch
>
>
> Some checks in DFSClient are no longer necessary since append code has been 
> merged to branch-1. This is a trivial cleanup of the code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4275) MiniDFSCluster-based tests fail on Windows due to failure to delete test name node directory

2012-12-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4275:


Attachment: HDFS-4275.1.patch

I am attaching a patch to resolve the remaining issues after resolution of 
HDFS-4261.

The only remaining issue is that 
{{TestBlockRecovery#testRaceBetweenReplicaRecoveryAndFinalizeBlock}} calls 
{{tearDown}} twice: once explicitly at the start of the test and once 
implicitly by JUnit after the test finishes.  On Windows, the double tear-down 
is problematic: the second delete fails, causing the test to fail.  I've added 
a flag to track whether {{tearDown}} has already executed for a test and to 
prevent double execution.
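The guard described above can be sketched roughly as follows (hypothetical class and method names for illustration; the actual patch modifies the JUnit {{tearDown}} of {{TestBlockRecovery}}):

```java
// Sketch (hypothetical names): make tearDown idempotent so an explicit call
// at the start of a test plus JUnit's implicit call after it do not both
// attempt the delete.
public class TearDownGuard {
    private boolean tornDown = false;

    public boolean tearDown() {
        if (tornDown) {
            return false; // already executed for this test; skip the second delete
        }
        tornDown = true;
        // ... delete test data directories here ...
        return true;
    }
}
```

With this flag, the second invocation becomes a no-op instead of failing on a directory that is already gone.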

> MiniDFSCluster-based tests fail on Windows due to failure to delete test name 
> node directory
> 
>
> Key: HDFS-4275
> URL: https://issues.apache.org/jira/browse/HDFS-4275
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4275.1.patch
>
>
> Multiple HDFS test suites fail on Windows during initialization of 
> {{MiniDFSCluster}} due to "Could not fully delete" errors on the name node 
> test data directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4288) NN accepts incremental BR as IBR in safemode

2012-12-07 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-4288:
-

 Summary: NN accepts incremental BR as IBR in safemode
 Key: HDFS-4288
 URL: https://issues.apache.org/jira/browse/HDFS-4288
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical


If a DN is ready to send an incremental BR and the NN goes down, the DN will 
repeatedly try to reconnect. When the NN comes back up, it will process the 
DN's incremental BR as an initial BR. The NN now thinks the DN has only a few 
blocks, and will ignore all subsequent BRs from that DN until it leaves 
safemode -- which it may never do because of all the "missing" blocks on the 
affected DNs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4279) NameNode does not initialize generic conf keys when started with -recover

2012-12-07 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4279:
---

Status: Patch Available  (was: Open)

> NameNode does not initialize generic conf keys when started with -recover
> -
>
> Key: HDFS-4279
> URL: https://issues.apache.org/jira/browse/HDFS-4279
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3236.002.patch
>
>
> This means that configurations that scope the location of the 
> name/edits/shared edits dirs by nameservice or namenode won't work with `hdfs 
> namenode -recover`.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2264) NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo annotation

2012-12-07 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526703#comment-13526703
 ] 

Aaron T. Myers commented on HDFS-2264:
--

The test failure was unrelated and was fixed by HDFS-4282. No tests are 
included in this patch since Kerberos is required to test this stuff out.

Daryn, how does this patch look now?

> NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo 
> annotation
> ---
>
> Key: HDFS-2264
> URL: https://issues.apache.org/jira/browse/HDFS-2264
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2264.patch, HDFS-2264.patch, HDFS-2264.r1.diff
>
>
> The {{@KerberosInfo}} annotation specifies the expected server and client 
> principals for a given protocol in order to look up the correct principal 
> name from the config. The {{NamenodeProtocol}} has the wrong value for the 
> client config key. This wasn't noticed because most setups actually use the 
> same *value* for both the NN and 2NN principals ({{hdfs/_HOST@REALM}}), 
> in which the {{_HOST}} part gets replaced at run-time. This bug therefore 
> only manifests itself on secure setups which explicitly specify the NN and 
> 2NN principals.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2264) NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo annotation

2012-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526680#comment-13526680
 ] 

Hadoop QA commented on HDFS-2264:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12559903/HDFS-2264.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3620//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3620//console

This message is automatically generated.

> NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo 
> annotation
> ---
>
> Key: HDFS-2264
> URL: https://issues.apache.org/jira/browse/HDFS-2264
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2264.patch, HDFS-2264.patch, HDFS-2264.r1.diff
>
>
> The {{@KerberosInfo}} annotation specifies the expected server and client 
> principals for a given protocol in order to look up the correct principal 
> name from the config. The {{NamenodeProtocol}} has the wrong value for the 
> client config key. This wasn't noticed because most setups actually use the 
> same *value* for both the NN and 2NN principals ({{hdfs/_HOST@REALM}}), 
> in which the {{_HOST}} part gets replaced at run-time. This bug therefore 
> only manifests itself on secure setups which explicitly specify the NN and 
> 2NN principals.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2012-12-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526668#comment-13526668
 ] 

Chris Nauroth commented on HDFS-4287:
-

Alejandro, thank you for the notes.  I was not yet aware of this part, so it's 
very helpful.

> HTTPFS tests fail on Windows
> 
>
> Key: HDFS-4287
> URL: https://issues.apache.org/jira/browse/HDFS-4287
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> The HTTPFS tests have some platform-specific assumptions that cause the tests 
> to fail when run on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4236) Regression: HDFS-4171 puts artificial limit on username length

2012-12-07 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526662#comment-13526662
 ] 

Alejandro Abdelnur commented on HDFS-4236:
--

Thanks for following up on this Suresh.

> Regression: HDFS-4171 puts artificial limit on username length
> --
>
> Key: HDFS-4236
> URL: https://issues.apache.org/jira/browse/HDFS-4236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Allen Wittenauer
>Assignee: Alejandro Abdelnur
>Priority: Blocker
>  Labels: regression
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4171.patch
>
>
> HDFS-4171 made the invalid assumption that there is a common limit on user 
> names at the UNIX level.  Almost all modern systems, when running under a 
> 64-bit kernel, use a pointer instead of a char array in the passwd struct.  
> This makes usernames essentially unlimited.
> Additionally, IIRC, the only place where HDFS and the OS interact on the 
> username is during group lookup.
> This limit is artificial and should be removed.  There is a very high risk 
> that we will break users, especially service accounts used for automated 
> processes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2012-12-07 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526659#comment-13526659
 ] 

Alejandro Abdelnur commented on HDFS-4287:
--

Chris, you are correct. This should be easy to address by detecting whether the 
OS is Windows and applying the correct logic to handle the drive spec.

The other thing you'll have to take care of to run HTTPFS on Windows is the 
startup scripts. They set some environment variables and then delegate to the 
Tomcat startup scripts. I'd assume doing the same thing in BAT/CMD and calling 
the corresponding Tomcat Windows startup script will do.

Hope this helps.


> HTTPFS tests fail on Windows
> 
>
> Key: HDFS-4287
> URL: https://issues.apache.org/jira/browse/HDFS-4287
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> The HTTPFS tests have some platform-specific assumptions that cause the tests 
> to fail when run on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4260) Fix HDFS tests to set test dir to a valid HDFS path as opposed to the local build path

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526634#comment-13526634
 ] 

Hudson commented on HDFS-4260:
--

Integrated in Hadoop-trunk-Commit #3097 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3097/])
HDFS-4260 Fix HDFS tests to set test dir to a valid HDFS path as opposed to 
the local build path (Chris Nauroth via Sanjay) (Revision 1418424)

 Result = SUCCESS
sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418424
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsCreateMkdir.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSetUMask.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestFcHdfsSymlink.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestHDFSFileContextMainOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemAtHdfsRoot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemHdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsAtHdfsRoot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsHdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestFSMainOperationsWebHdfs.java


> Fix HDFS tests to set test dir to a valid HDFS path as opposed to the local 
> build path
> --
>
> Key: HDFS-4260
> URL: https://issues.apache.org/jira/browse/HDFS-4260
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4260-branch-trunk-win.1.patch, 
> HDFS-4260-branch-trunk-win.2.patch, HDFS-4260-branch-trunk-win.3.patch
>
>
> Multiple HDFS test suites fail early during initialization on Windows because 
> of inclusion of the drive spec in the test root path.  The ':' gets rejected 
> as an invalid character by the logic of isValidName.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2012-12-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526621#comment-13526621
 ] 

Chris Nauroth commented on HDFS-4287:
-

One problem is that {{TestDirHelper}} enforces some platform-specific 
validation on TEST_DIR_ROOT.  On Windows, TEST_DIR_ROOT.startsWith("/") is 
likely to be false because of the drive spec.

{code}
  static {
    try {
      TEST_DIR_ROOT = System.getProperty(TEST_DIR_PROP, new File("target").getAbsolutePath());
      if (!TEST_DIR_ROOT.startsWith("/")) {
        System.err.println(MessageFormat.format(
            "System property [{0}]=[{1}] must be set to an absolute path",
            TEST_DIR_PROP, TEST_DIR_ROOT));
        System.exit(-1);
      } else if (TEST_DIR_ROOT.length() < 4) {
        System.err.println(MessageFormat.format(
            "System property [{0}]=[{1}] must be at least 4 chars",
            TEST_DIR_PROP, TEST_DIR_ROOT));
        System.exit(-1);
      }
{code}

That part is easily fixable by switching to {{Path#isUriPathAbsolute}}.  After 
fixing that, there are additional problems with path handling:

{code}
Running org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
Tests run: 30, Failures: 0, Errors: 30, Skipped: 0, Time elapsed: 23.219 sec 
<<< FAILURE!
testOperation[0](org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem)
  Time elapsed: 21656 sec  <<< ERROR!
java.lang.RuntimeException: java.lang.IllegalArgumentException: Pathname 
/user/cnauroth/C:/hd2/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/testOperation-0
 from 
./C:/hd2/hadoop-hdfs-project/hadoop-hdfs-httpfs/target/test-dir/testOperation-0 
is not a valid DFS filename.
{code}
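A platform-neutral version of the absolute-path check could look like the following sketch (hypothetical helper, written standalone for illustration; the actual fix switches to Hadoop's {{Path#isUriPathAbsolute}}):

```java
// Sketch: accept both POSIX-style absolute paths ("/foo") and Windows
// drive-spec paths ("C:/foo" or "C:\foo") as valid test roots, instead of
// only checking startsWith("/").
public class AbsolutePathCheck {
    static boolean isAbsoluteTestRoot(String p) {
        if (p.startsWith("/")) {
            return true;               // POSIX-style absolute path
        }
        // Windows drive spec, e.g. "C:/..." or "C:\..."
        return p.length() >= 3
            && Character.isLetter(p.charAt(0))
            && p.charAt(1) == ':'
            && (p.charAt(2) == '/' || p.charAt(2) == '\\');
    }
}
```

Under such a check, a Windows test root like the one in the failure output above would pass validation rather than aborting the JVM.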


> HTTPFS tests fail on Windows
> 
>
> Key: HDFS-4287
> URL: https://issues.apache.org/jira/browse/HDFS-4287
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> The HTTPFS tests have some platform-specific assumptions that cause the tests 
> to fail when run on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4260) Fix HDFS tests to set test dir to a valid HDFS path as opposed to the local build path

2012-12-07 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HDFS-4260:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix HDFS tests to set test dir to a valid HDFS path as opposed to the local 
> build path
> --
>
> Key: HDFS-4260
> URL: https://issues.apache.org/jira/browse/HDFS-4260
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4260-branch-trunk-win.1.patch, 
> HDFS-4260-branch-trunk-win.2.patch, HDFS-4260-branch-trunk-win.3.patch
>
>
> Multiple HDFS test suites fail early during initialization on Windows because 
> of inclusion of the drive spec in the test root path.  The ':' gets rejected 
> as an invalid character by the logic of isValidName.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4287) HTTPFS tests fail on Windows

2012-12-07 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4287:
---

 Summary: HTTPFS tests fail on Windows
 Key: HDFS-4287
 URL: https://issues.apache.org/jira/browse/HDFS-4287
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


The HTTPFS tests have some platform-specific assumptions that cause the tests 
to fail when run on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4260) Fix HDFS tests to set test dir to a valid HDFS path as opposed to the local build path

2012-12-07 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HDFS-4260:
---

Summary: Fix HDFS tests to set test dir to a valid HDFS path as opposed to 
the local build path  (was: incorrect inclusion of drive spec in test root path 
causes multiple HDFS test failures on Windows)

> Fix HDFS tests to set test dir to a valid HDFS path as opposed to the local 
> build path
> --
>
> Key: HDFS-4260
> URL: https://issues.apache.org/jira/browse/HDFS-4260
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4260-branch-trunk-win.1.patch, 
> HDFS-4260-branch-trunk-win.2.patch, HDFS-4260-branch-trunk-win.3.patch
>
>
> Multiple HDFS test suites fail early during initialization on Windows because 
> of inclusion of the drive spec in the test root path.  The ':' gets rejected 
> as an invalid character by the logic of isValidName.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4260) incorrect inclusion of drive spec in test root path causes multiple HDFS test failures on Windows

2012-12-07 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526586#comment-13526586
 ] 

Sanjay Radia commented on HDFS-4260:


+1; will commit this shortly.

> incorrect inclusion of drive spec in test root path causes multiple HDFS test 
> failures on Windows
> -
>
> Key: HDFS-4260
> URL: https://issues.apache.org/jira/browse/HDFS-4260
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, trunk-win
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4260-branch-trunk-win.1.patch, 
> HDFS-4260-branch-trunk-win.2.patch, HDFS-4260-branch-trunk-win.3.patch
>
>
> Multiple HDFS test suites fail early during initialization on Windows because 
> of inclusion of the drive spec in the test root path.  The ':' gets rejected 
> as an invalid character by the logic of isValidName.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2264) NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo annotation

2012-12-07 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2264:
-

Attachment: HDFS-2264.patch

Thanks a lot for taking a look, Daryn. Though I'm pretty sure the HA changes 
would be no-ops, it seems fine to me to break out that change. Here's a patch 
which removes those.

> NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo 
> annotation
> ---
>
> Key: HDFS-2264
> URL: https://issues.apache.org/jira/browse/HDFS-2264
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2264.patch, HDFS-2264.patch, HDFS-2264.r1.diff
>
>
> The {{@KerberosInfo}} annotation specifies the expected server and client 
> principals for a given protocol in order to look up the correct principal 
> name from the config. The {{NamenodeProtocol}} has the wrong value for the 
> client config key. This wasn't noticed because most setups actually use the 
> same *value* for both the NN and 2NN principals ({{hdfs/_HOST@REALM}}), 
> in which the {{_HOST}} part gets replaced at run-time. This bug therefore 
> only manifests itself on secure setups which explicitly specify the NN and 
> 2NN principals.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4236) Regression: HDFS-4171 puts artificial limit on username length

2012-12-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4236:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the change to trunk and branch-2. Thank you Allen for noticing and 
reporting the issue. Thank you Alejandro for the patch.

> Regression: HDFS-4171 puts artificial limit on username length
> --
>
> Key: HDFS-4236
> URL: https://issues.apache.org/jira/browse/HDFS-4236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Allen Wittenauer
>Assignee: Alejandro Abdelnur
>Priority: Blocker
>  Labels: regression
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4171.patch
>
>
> HDFS-4171 made the invalid assumption that there is a common limit on user 
> names at the UNIX level.  Almost all modern systems, when running under a 
> 64-bit kernel, use a pointer instead of a char array in the passwd struct.  
> This makes usernames essentially unlimited.
> Additionally, IIRC, the only place where HDFS and the OS interact on the 
> username is during group lookup.
> This limit is artificial and should be removed.  There is a very high risk 
> that we will break users, especially service accounts used for automated 
> processes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4171) WebHDFS and HttpFs should accept only valid Unix user names

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526461#comment-13526461
 ] 

Hudson commented on HDFS-4171:
--

Integrated in Hadoop-trunk-Commit #3096 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3096/])
HDFS-4236. Remove artificial limit on username length introduced in 
HDFS-4171. Contributed by Alejandro Abdelnur. (Revision 1418356)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418356
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/wsrs/TestUserProvider.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java


> WebHDFS and HttpFs should accept only valid Unix user names
> ---
>
> Key: HDFS-4171
> URL: https://issues.apache.org/jira/browse/HDFS-4171
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Harsh J
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4171.patch, HDFS-4171.patch, HDFS-4171.patch, 
> HDFS-4171.patch
>
>
> HttpFs tries to use UserProvider.USER_PATTERN to match all usernames before a 
> doAs impersonation function. This regex is too strict for most usernames, as 
> it disallows any special character at all. We should relax it further or drop 
> the need to match usernames there.
> WebHDFS currently has no such limitations.
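For illustration, a less restrictive Unix-style username pattern might look like the sketch below (hypothetical pattern, not the one ultimately committed; note that HDFS-4236 later removed the length restriction introduced here entirely):

```java
import java.util.regex.Pattern;

// Sketch: a relaxed Unix-style username check that permits digits,
// '_', '-', '.' and a trailing '$' (common for machine accounts),
// rather than rejecting every special character.
public class UserNameCheck {
    static final Pattern USER = Pattern.compile("[A-Za-z_][A-Za-z0-9._-]*[$]?");

    static boolean isValid(String name) {
        return USER.matcher(name).matches();
    }
}
```

A pattern along these lines would accept typical service-account names (e.g. ones containing hyphens or dots) that a letters-only regex rejects.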

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4236) Regression: HDFS-4171 puts artificial limit on username length

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526462#comment-13526462
 ] 

Hudson commented on HDFS-4236:
--

Integrated in Hadoop-trunk-Commit #3096 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3096/])
HDFS-4236. Remove artificial limit on username length introduced in 
HDFS-4171. Contributed by Alejandro Abdelnur. (Revision 1418356)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418356
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/wsrs/TestUserProvider.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java


> Regression: HDFS-4171 puts artificial limit on username length
> --
>
> Key: HDFS-4236
> URL: https://issues.apache.org/jira/browse/HDFS-4236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Allen Wittenauer
>Assignee: Alejandro Abdelnur
>Priority: Blocker
>  Labels: regression
> Attachments: HDFS-4171.patch
>
>
> HDFS-4171 made the invalid assumption that there is a common limit on user 
> names at the UNIX level.  Almost all modern systems, when running under a 
> 64-bit kernel, use a pointer instead of a char array in the passwd struct.  
> This makes usernames essentially unlimited.
> Additionally, IIRC, the only place where HDFS and the OS interact with the 
> username is during group lookup.
> This limit is artificial and should be removed.  There is a very high risk 
> that we will break users, especially service accounts used for automated 
> processes.
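The regression comes down to a username-validation regex that capped both the character set and the length. As a hedged illustration only (the class name and the exact patterns below are hypothetical, not the ones HDFS-4171/HDFS-4236 shipped), a relaxed check with no length cap might look like:

```java
import java.util.regex.Pattern;

// Hypothetical sketch: STRICT/RELAXED and their regexes are illustrative,
// not the actual patterns from the HDFS patches.
public class UserNameCheck {
    // A strict POSIX-style pattern rejects dots, capitals, and more.
    static final Pattern STRICT = Pattern.compile("^[a-z_][a-z0-9_-]*$");
    // A relaxed pattern with no length cap avoids breaking long
    // service-account names.
    static final Pattern RELAXED = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*$");

    public static boolean isValid(String name) {
        return RELAXED.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("svc.automation-account")); // accepted
        System.out.println(isValid("-leadinghyphen"));         // rejected
    }
}
```

The key design point is that the check constrains the leading character and the alphabet, but deliberately places no upper bound on length.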



[jira] [Commented] (HDFS-2264) NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo annotation

2012-12-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526426#comment-13526426
 ] 

Daryn Sharp commented on HDFS-2264:
---

I think the change generally looks ok if the test failure is unrelated, but I'd 
suggest splitting out the HA changes or redefining the jira.

> NamenodeProtocol has the wrong value for clientPrincipal in KerberosInfo 
> annotation
> ---
>
> Key: HDFS-2264
> URL: https://issues.apache.org/jira/browse/HDFS-2264
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.2-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-2264.patch, HDFS-2264.r1.diff
>
>
> The {{@KerberosInfo}} annotation specifies the expected server and client 
> principals for a given protocol in order to look up the correct principal 
> name from the config. The {{NamenodeProtocol}} has the wrong value for the 
> client config key. This wasn't noticed because most setups actually use the 
> same *value* for both the NN and 2NN principals ({{hdfs/_HOST@REALM}}), 
> in which the {{_HOST}} part gets replaced at run-time. This bug therefore 
> only manifests itself on secure setups which explicitly specify the NN and 
> 2NN principals.
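The annotation in question maps a protocol to *configuration keys* naming the expected principals, which is why a wrong clientPrincipal key goes unnoticed as long as both keys resolve to the same value. A self-contained stand-in (Hadoop's real {{@KerberosInfo}} lives in org.apache.hadoop.security; the nested annotation below merely mimics its shape, and the config key strings are the usual HDFS ones, used here for illustration) shows the pattern:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Illustrative stand-in for Hadoop's @KerberosInfo: the annotation names
// config *keys*, not principal values, so a wrong clientPrincipal key only
// bites when the two keys actually resolve to different principals.
public class KerberosInfoDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface KerberosInfo {
        String serverPrincipal();
        String clientPrincipal();
    }

    @KerberosInfo(
        serverPrincipal = "dfs.namenode.kerberos.principal",
        // The bug class: pointing this at the wrong key is invisible while
        // both keys resolve to the same hdfs/_HOST@REALM value.
        clientPrincipal = "dfs.secondary.namenode.kerberos.principal")
    interface NamenodeProtocolLike {}

    public static void main(String[] args) {
        KerberosInfo info =
            NamenodeProtocolLike.class.getAnnotation(KerberosInfo.class);
        System.out.println(info.clientPrincipal());
    }
}
```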



[jira] [Commented] (HDFS-4282) TestEditLog.testFuzzSequences FAILED in all pre-commit test

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526402#comment-13526402
 ] 

Hudson commented on HDFS-4282:
--

Integrated in Hadoop-Mapreduce-trunk #1278 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1278/])
HDFS-4282. TestEditLog.testFuzzSequences FAILED in all pre-commit test. 
Contributed by Todd Lipcon. (Revision 1418214)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418214
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/UTF8.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestUTF8.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java


> TestEditLog.testFuzzSequences FAILED in all pre-commit test
> ---
>
> Key: HDFS-4282
> URL: https://issues.apache.org/jira/browse/HDFS-4282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Junping Du
>Assignee: Todd Lipcon
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs-4282.txt
>
>
> Caught non-IOException throwable java.lang.RuntimeException: 
> java.io.IOException: Invalid UTF8 at 9871b370d70a  at 
> org.apache.hadoop.io.UTF8.toString(UTF8.java:154)  at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$TimesOp.readFields(FSEditLogOp.java:1439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:2399)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:2290)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:177)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:175)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:217)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:72)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.validateNoCrash(TestEditLog.java:1233)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.testFuzzSequences(TestEditLog.java:1272)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:236)  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)  at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invo
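The fuzz test above asserts that feeding arbitrary bytes to the edit-log reader surfaces malformed input as a checked IOException, never as an unchecked RuntimeException. A stdlib-only sketch of that same property, using DataInputStream.readUTF as a stand-in decoder (the method name survivesFuzz is illustrative, not from the patch):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.util.Random;

public class FuzzUtf {
    // Feed random byte sequences to a UTF decoder and require that
    // malformed input surfaces only as a checked IOException.
    public static boolean survivesFuzz(long seed, int rounds) {
        Random rnd = new Random(seed);
        for (int i = 0; i < rounds; i++) {
            byte[] buf = new byte[2 + rnd.nextInt(32)];
            rnd.nextBytes(buf);
            // The first two bytes are the length prefix readUTF expects;
            // clamp it so we exercise the decoder, not just EOF handling.
            buf[0] = 0;
            buf[1] = (byte) (buf.length - 2);
            try {
                new DataInputStream(new ByteArrayInputStream(buf)).readUTF();
            } catch (IOException expected) {
                // Invalid UTF-8 reported as a checked IOException: fine.
            } catch (RuntimeException unexpected) {
                return false; // the failure mode the fuzz test guards against
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(survivesFuzz(42L, 1000));
    }
}
```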

[jira] [Commented] (HDFS-3680) Allow customized audit logging in HDFS FSNamesystem

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526400#comment-13526400
 ] 

Hudson commented on HDFS-3680:
--

Integrated in Hadoop-Mapreduce-trunk #1278 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1278/])
HDFS-3680. Allow customized audit logging in HDFS FSNamesystem. Contributed 
by Marcelo Vanzin. (Revision 1418114)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AuditLogger.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java


> Allow customized audit logging in HDFS FSNamesystem
> ---
>
> Key: HDFS-3680
> URL: https://issues.apache.org/jira/browse/HDFS-3680
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: accesslogger-v1.patch, accesslogger-v2.patch, 
> hdfs-3680-v10.patch, hdfs-3680-v3.patch, hdfs-3680-v4.patch, 
> hdfs-3680-v5.patch, hdfs-3680-v6.patch, hdfs-3680-v7.patch, 
> hdfs-3680-v8.patch, hdfs-3680-v9.patch
>
>
> Currently, FSNamesystem writes audit logs to a logger; that makes it easy to 
> get audit logs in some log file. But it makes it kinda tricky to store audit 
> logs in any other way (let's say a database), because it would require the 
> code to implement a log appender (and thus know what logging system is 
> actually being used underneath the façade), and parse the textual log message 
> generated by FSNamesystem.
> I'm attaching a patch that introduces a cleaner interface for this use case.
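The interface proposed here replaces parsing of textual log lines with structured callbacks. A minimal stdlib-only sketch of the idea follows; the interface name and method signature are illustrative, not the exact API the patch adds:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of a pluggable audit logger: the callback receives
// structured fields, so an implementation can write to any sink
// (e.g. a database) without parsing FSNamesystem's log text.
public class AuditDemo {
    interface AuditLogger {
        void logAuditEvent(boolean succeeded, String user, String cmd, String src);
    }

    // A trivial sink standing in for a database-backed implementation.
    static class InMemoryAuditLogger implements AuditLogger {
        final List<String> events = new ArrayList<>();
        public void logAuditEvent(boolean ok, String user, String cmd, String src) {
            events.add((ok ? "ALLOWED " : "DENIED ") + user + " " + cmd + " " + src);
        }
    }

    public static void main(String[] args) {
        InMemoryAuditLogger logger = new InMemoryAuditLogger();
        logger.logAuditEvent(true, "alice", "open", "/data/file");
        System.out.println(logger.events.get(0));
    }
}
```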



[jira] [Commented] (HDFS-3680) Allow customized audit logging in HDFS FSNamesystem

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526364#comment-13526364
 ] 

Hudson commented on HDFS-3680:
--

Integrated in Hadoop-Hdfs-trunk #1247 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1247/])
HDFS-3680. Allow customized audit logging in HDFS FSNamesystem. Contributed 
by Marcelo Vanzin. (Revision 1418114)

 Result = FAILURE
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AuditLogger.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java


> Allow customized audit logging in HDFS FSNamesystem
> ---
>
> Key: HDFS-3680
> URL: https://issues.apache.org/jira/browse/HDFS-3680
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: accesslogger-v1.patch, accesslogger-v2.patch, 
> hdfs-3680-v10.patch, hdfs-3680-v3.patch, hdfs-3680-v4.patch, 
> hdfs-3680-v5.patch, hdfs-3680-v6.patch, hdfs-3680-v7.patch, 
> hdfs-3680-v8.patch, hdfs-3680-v9.patch
>
>
> Currently, FSNamesystem writes audit logs to a logger; that makes it easy to 
> get audit logs in some log file. But it makes it kinda tricky to store audit 
> logs in any other way (let's say a database), because it would require the 
> code to implement a log appender (and thus know what logging system is 
> actually being used underneath the façade), and parse the textual log message 
> generated by FSNamesystem.
> I'm attaching a patch that introduces a cleaner interface for this use case.



[jira] [Commented] (HDFS-4282) TestEditLog.testFuzzSequences FAILED in all pre-commit test

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526366#comment-13526366
 ] 

Hudson commented on HDFS-4282:
--

Integrated in Hadoop-Hdfs-trunk #1247 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1247/])
HDFS-4282. TestEditLog.testFuzzSequences FAILED in all pre-commit test. 
Contributed by Todd Lipcon. (Revision 1418214)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418214
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/UTF8.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestUTF8.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java


> TestEditLog.testFuzzSequences FAILED in all pre-commit test
> ---
>
> Key: HDFS-4282
> URL: https://issues.apache.org/jira/browse/HDFS-4282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Junping Du
>Assignee: Todd Lipcon
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs-4282.txt
>
>
> Caught non-IOException throwable java.lang.RuntimeException: 
> java.io.IOException: Invalid UTF8 at 9871b370d70a  at 
> org.apache.hadoop.io.UTF8.toString(UTF8.java:154)  at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$TimesOp.readFields(FSEditLogOp.java:1439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:2399)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:2290)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:177)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:175)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:217)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:72)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.validateNoCrash(TestEditLog.java:1233)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.testFuzzSequences(TestEditLog.java:1272)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:236)  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)  at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(Provide

[jira] [Commented] (HDFS-4282) TestEditLog.testFuzzSequences FAILED in all pre-commit test

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526308#comment-13526308
 ] 

Hudson commented on HDFS-4282:
--

Integrated in Hadoop-Yarn-trunk #58 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/58/])
HDFS-4282. TestEditLog.testFuzzSequences FAILED in all pre-commit test. 
Contributed by Todd Lipcon. (Revision 1418214)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418214
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/UTF8.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestUTF8.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java


> TestEditLog.testFuzzSequences FAILED in all pre-commit test
> ---
>
> Key: HDFS-4282
> URL: https://issues.apache.org/jira/browse/HDFS-4282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Junping Du
>Assignee: Todd Lipcon
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs-4282.txt
>
>
> Caught non-IOException throwable java.lang.RuntimeException: 
> java.io.IOException: Invalid UTF8 at 9871b370d70a  at 
> org.apache.hadoop.io.UTF8.toString(UTF8.java:154)  at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$TimesOp.readFields(FSEditLogOp.java:1439)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:2399)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:2290)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:177)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:175)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:217)
>   at 
> org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:72)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.validateNoCrash(TestEditLog.java:1233)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.testFuzzSequences(TestEditLog.java:1272)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:236)  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)  at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFac

[jira] [Commented] (HDFS-3680) Allow customized audit logging in HDFS FSNamesystem

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526306#comment-13526306
 ] 

Hudson commented on HDFS-3680:
--

Integrated in Hadoop-Yarn-trunk #58 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/58/])
HDFS-3680. Allow customized audit logging in HDFS FSNamesystem. Contributed 
by Marcelo Vanzin. (Revision 1418114)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418114
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AuditLogger.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java


> Allow customized audit logging in HDFS FSNamesystem
> ---
>
> Key: HDFS-3680
> URL: https://issues.apache.org/jira/browse/HDFS-3680
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: accesslogger-v1.patch, accesslogger-v2.patch, 
> hdfs-3680-v10.patch, hdfs-3680-v3.patch, hdfs-3680-v4.patch, 
> hdfs-3680-v5.patch, hdfs-3680-v6.patch, hdfs-3680-v7.patch, 
> hdfs-3680-v8.patch, hdfs-3680-v9.patch
>
>
> Currently, FSNamesystem writes audit logs to a logger; that makes it easy to 
> get audit logs in some log file. But it makes it kinda tricky to store audit 
> logs in any other way (let's say a database), because it would require the 
> code to implement a log appender (and thus know what logging system is 
> actually being used underneath the façade), and parse the textual log message 
> generated by FSNamesystem.
> I'm attaching a patch that introduces a cleaner interface for this use case.



[jira] [Updated] (HDFS-4286) Changes from BOOKKEEPER-203 broken capability of including bookkeeper-server jar in hidden package of BKJM

2012-12-07 Thread Ivan Kelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Kelly updated HDFS-4286:
-

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-3399

> Changes from BOOKKEEPER-203 broken capability of including bookkeeper-server 
> jar in hidden package of BKJM
> --
>
> Key: HDFS-4286
> URL: https://issues.apache.org/jira/browse/HDFS-4286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Vinay
> Fix For: 3.0.0, 2.0.3-alpha
>
>
> BOOKKEEPER-203 changed LedgerLayout to store the ManagerFactoryClass 
> instead of the ManagerFactoryName.
> Because of this, BKJM can no longer shade the bookkeeper-server jar inside 
> the BKJM jar: the LAYOUT znode created by a stock BookieServer is not 
> readable by BKJM, since BKJM's copy of the classes lives in hidden packages 
> (and vice versa).
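For context on the shading being described, a hypothetical maven-shade-plugin relocation (the hidden.bkjournal prefix is illustrative) rewrites BookKeeper's package names inside the BKJM jar; this is exactly what stops working once the znode records the factory class name, because the relocated class name no longer matches what a stock BookieServer wrote:

```xml
<!-- Hypothetical sketch of shading bookkeeper-server into the BKJM jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>org.apache.bookkeeper</pattern>
        <shadedPattern>hidden.bkjournal.org.apache.bookkeeper</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```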



[jira] [Moved] (HDFS-4286) Changes from BOOKKEEPER-203 broken capability of including bookkeeper-server jar in hidden package of BKJM

2012-12-07 Thread Ivan Kelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Kelly moved BOOKKEEPER-478 to HDFS-4286:
-

Fix Version/s: (was: 4.2.0)
   2.0.3-alpha
   3.0.0
  Key: HDFS-4286  (was: BOOKKEEPER-478)
  Project: Hadoop HDFS  (was: Bookkeeper)

> Changes from BOOKKEEPER-203 broken capability of including bookkeeper-server 
> jar in hidden package of BKJM
> --
>
> Key: HDFS-4286
> URL: https://issues.apache.org/jira/browse/HDFS-4286
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Vinay
> Fix For: 3.0.0, 2.0.3-alpha
>
>
> BOOKKEEPER-203 changed LedgerLayout to store the ManagerFactoryClass 
> instead of the ManagerFactoryName.
> Because of this, BKJM can no longer shade the bookkeeper-server jar inside 
> the BKJM jar: the LAYOUT znode created by a stock BookieServer is not 
> readable by BKJM, since BKJM's copy of the classes lives in hidden packages 
> (and vice versa).



[jira] [Commented] (HDFS-4282) TestEditLog.testFuzzSequences FAILED in all pre-commit test

2012-12-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13526227#comment-13526227
 ] 

Hudson commented on HDFS-4282:
--

Integrated in Hadoop-trunk-Commit #3095 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3095/])
HDFS-4282. TestEditLog.testFuzzSequences FAILED in all pre-commit test. 
Contributed by Todd Lipcon. (Revision 1418214)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418214
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/SequenceFile.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/UTF8.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestUTF8.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java


> TestEditLog.testFuzzSequences FAILED in all pre-commit test
> ---
>
> Key: HDFS-4282
> URL: https://issues.apache.org/jira/browse/HDFS-4282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Junping Du
>Assignee: Todd Lipcon
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs-4282.txt
>
>
> Caught non-IOException throwable java.lang.RuntimeException: 
> java.io.IOException: Invalid UTF8 at 9871b370d70a  at 
> org.apache.hadoop.io.UTF8.toString(UTF8.java:154)  at 
> org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$TimesOp.readFields(FSEditLogOp.java:1439)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:2399)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:2290)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:177)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:175)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:217)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:72)
>   at org.apache.hadoop.hdfs.server.namenode.TestEditLog.validateNoCrash(TestEditLog.java:1233)
>   at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testFuzzSequences(TestEditLog.java:1272)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(Pro

[jira] [Updated] (HDFS-4282) TestEditLog.testFuzzSequences FAILED in all pre-commit test

2012-12-07 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-4282:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review. Committed to branch-2 and trunk.

> TestEditLog.testFuzzSequences FAILED in all pre-commit test
> ---
>
> Key: HDFS-4282
> URL: https://issues.apache.org/jira/browse/HDFS-4282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Junping Du
>Assignee: Todd Lipcon
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs-4282.txt
>
>
> Caught non-IOException throwable java.lang.RuntimeException: java.io.IOException: Invalid UTF8 at 9871b370d70a
>   at org.apache.hadoop.io.UTF8.toString(UTF8.java:154)
>   at org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.readString(FSImageSerialization.java:200)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$TimesOp.readFields(FSEditLogOp.java:1439)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.decodeOp(FSEditLogOp.java:2399)
>   at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$Reader.readOp(FSEditLogOp.java:2290)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:177)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:175)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:217)
>   at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:72)
>   at org.apache.hadoop.hdfs.server.namenode.TestEditLog.validateNoCrash(TestEditLog.java:1233)
>   at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testFuzzSequences(TestEditLog.java:1272)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>   at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
> Caused by: java.io.IOException: Invalid UTF8 at 9871b370d70a
>   at org.apache.hadoop.io.UTF8.readChars(UTF8.java:277)
>   at org.apache.hadoop.io.UTF8.toString(UTF8.java:151)
>   ... 39 more
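The failure mode above is the classic fuzz-test contract: feeding random bytes to the edit log decoder is allowed to fail with IOException (corrupt input), but a raw RuntimeException escaping the decoder fails the test. The sketch below illustrates that pattern in simplified form; `FuzzSketch` and its `decodeOp` are hypothetical stand-ins, not the actual TestEditLog or FSEditLogOp code.

```java
import java.io.IOException;
import java.util.Random;

// Hypothetical sketch of the pattern behind TestEditLog#testFuzzSequences:
// feed random bytes to a decoder and verify that only IOException (never a
// raw RuntimeException) escapes to the caller.
public class FuzzSketch {
    // Stand-in for an op decoder. A real decoder parses opcodes and fields;
    // here we simulate one that wraps internal decoding failures (such as
    // an invalid UTF-8 sequence) in IOException instead of letting a
    // RuntimeException propagate.
    static void decodeOp(byte[] data) throws IOException {
        try {
            if (data.length > 0 && (data[0] & 0x80) != 0) {
                // Simulate an internal decoding failure.
                throw new RuntimeException("invalid byte sequence");
            }
        } catch (RuntimeException e) {
            // The fix pattern: translate internal failures into IOException
            // so callers treat the stream as corrupt rather than crashing.
            throw new IOException("malformed edit log op", e);
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for reproducible fuzzing
        int ioExceptions = 0;
        for (int i = 0; i < 1000; i++) {
            byte[] fuzz = new byte[16];
            rng.nextBytes(fuzz);
            try {
                decodeOp(fuzz);
            } catch (IOException expected) {
                ioExceptions++; // corrupt input may fail this way
            } catch (Throwable t) {
                // Anything else is a decoder bug, as in the trace above.
                throw new AssertionError("non-IOException escaped decoder", t);
            }
        }
        System.out.println("fuzz inputs rejected via IOException: " + ioExceptions);
    }
}
```

The design point is the catch hierarchy: the test whitelists IOException and treats every other Throwable as a failure, which is exactly how "Caught non-IOException throwable" surfaced here.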

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira