[jira] [Created] (HDFS-6425) reset postponedMisreplicatedBlocks and postponedMisreplicatedBlocksCount when NN becomes active

2014-05-16 Thread Ming Ma (JIRA)
Ming Ma created HDFS-6425:
-

 Summary: reset postponedMisreplicatedBlocks and 
postponedMisreplicatedBlocksCount when NN becomes active
 Key: HDFS-6425
 URL: https://issues.apache.org/jira/browse/HDFS-6425
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma


Sometimes we have a large number of over-replicated blocks when the NN fails 
over. When the new active NN takes over, over-replicated blocks are put into 
postponedMisreplicatedBlocks until all DNs for the block are no longer stale.

We have a case where the NNs flip-flop. Before postponedMisreplicatedBlocks 
became empty, the NNs failed over again and again, so 
postponedMisreplicatedBlocks just kept growing until the cluster stabilized. 

In addition, a large postponedMisreplicatedBlocks set can make 
rescanPostponedMisreplicatedBlocks slow. rescanPostponedMisreplicatedBlocks 
takes the write lock, so it can slow down block report processing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6425) reset postponedMisreplicatedBlocks and postponedMisreplicatedBlocksCount when NN becomes active

2014-05-16 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-6425:
--

Attachment: HDFS-6425.patch

The patch resets postponedMisreplicatedBlocks and 
postponedMisreplicatedBlocksCount as part of queue initialization. An extra 
test case will require more work if folks want it. 
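
For illustration, a rough sketch of the idea (not the actual patch; the field 
and method names below are assumptions about BlockManager internals):

{code}
import java.util.HashSet;
import java.util.Set;

// Sketch only: clear the postponed set and its counter while the replication
// queues are (re)initialized on transition to active, so repeated failovers
// do not let the set grow without bound.
class PostponedBlocksSketch {
  private final Set<Long> postponedMisreplicatedBlocks = new HashSet<>(); // block ids
  private long postponedMisreplicatedBlocksCount = 0;

  void initializeReplQueues() {
    // ... existing queue initialization ...
    postponedMisreplicatedBlocks.clear();
    postponedMisreplicatedBlocksCount = 0;
  }
}
{code}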

> reset postponedMisreplicatedBlocks and postponedMisreplicatedBlocksCount when 
> NN becomes active
> ---
>
> Key: HDFS-6425
> URL: https://issues.apache.org/jira/browse/HDFS-6425
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-6425.patch
>
>
> Sometimes we have a large number of over-replicated blocks when the NN fails 
> over. When the new active NN takes over, over-replicated blocks are put into 
> postponedMisreplicatedBlocks until all DNs for the block are no longer 
> stale.
> We have a case where the NNs flip-flop. Before postponedMisreplicatedBlocks 
> became empty, the NNs failed over again and again, so 
> postponedMisreplicatedBlocks just kept growing until the cluster stabilized. 
> In addition, a large postponedMisreplicatedBlocks set can make 
> rescanPostponedMisreplicatedBlocks slow. rescanPostponedMisreplicatedBlocks 
> takes the write lock, so it can slow down block report processing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6252) Phase out the old web UI in HDFS

2014-05-16 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000682#comment-14000682
 ] 

Benoy Antony commented on HDFS-6252:


Could you please commit this to branch-2 also?
A subsequent patch (HADOOP-10566) works for trunk, but fails to apply to 
branch-2.



> Phase out the old web UI in HDFS
> 
>
> Key: HDFS-6252
> URL: https://issues.apache.org/jira/browse/HDFS-6252
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.5.0
>Reporter: Fengdong Yu
>Assignee: Haohui Mai
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
> HDFS-6252.002.patch, HDFS-6252.003.patch, HDFS-6252.004.patch, 
> HDFS-6252.005.patch, HDFS-6252.006.patch
>
>
> We've deprecated hftp and hsftp in HDFS-5570, so if we download a file via 
> the "download this file" link on browseDirectory.jsp, it throws an error:
> Problem accessing /streamFile/***
> because the streamFile servlet was deleted in HDFS-5570.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6424) blockReport doesn't need to invalidate blocks on SBN

2014-05-16 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-6424:
--

Status: Patch Available  (was: Open)

> blockReport doesn't need to invalidate blocks on SBN
> 
>
> Key: HDFS-6424
> URL: https://issues.apache.org/jira/browse/HDFS-6424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-6424.patch
>
>
> After the fix in https://issues.apache.org/jira/browse/HDFS-6178, 
> blockManager no longer computes pending replication work on the SBN. As part 
> of that, it also stopped removing items from invalidateBlocks.
> Blocks can still be added to invalidateBlocks on the SBN as part of block 
> report processing. As a result, the PendingDeletionBlocks metric keeps going 
> up on the SBN.
> To fix that, we should not add blocks to invalidateBlocks during block 
> reports on the SBN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6424) blockReport doesn't need to invalidate blocks on SBN

2014-05-16 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-6424:
--

Attachment: HDFS-6424.patch

Here is the patch. The check is in the addToInvalidates functions to cover the 
different ways blocks can be invalidated. An extra test case will require more 
work if folks want it.
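
For illustration, a rough sketch of the kind of guard described (not the 
actual patch; block/datanode types are reduced to ids to keep the example 
self-contained, and the standby check is an assumption about the real code):

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: skip queueing invalidations while replication queues are not
// being populated (i.e. on the standby), so PendingDeletionBlocks does not
// grow on the SBN.
class InvalidateBlocksSketch {
  private final Map<String, Set<Long>> invalidateBlocks = new HashMap<>(); // dnUuid -> block ids
  private boolean populatingReplQueues = false; // false while in standby state

  void addToInvalidates(long blockId, String datanodeUuid) {
    if (!populatingReplQueues) {
      return; // standby computes no deletion work (HDFS-6178), so don't queue
    }
    invalidateBlocks.computeIfAbsent(datanodeUuid, k -> new HashSet<>()).add(blockId);
  }
}
{code}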

> blockReport doesn't need to invalidate blocks on SBN
> 
>
> Key: HDFS-6424
> URL: https://issues.apache.org/jira/browse/HDFS-6424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-6424.patch
>
>
> After the fix in https://issues.apache.org/jira/browse/HDFS-6178, 
> blockManager no longer computes pending replication work on the SBN. As part 
> of that, it also stopped removing items from invalidateBlocks.
> Blocks can still be added to invalidateBlocks on the SBN as part of block 
> report processing. As a result, the PendingDeletionBlocks metric keeps going 
> up on the SBN.
> To fix that, we should not add blocks to invalidateBlocks during block 
> reports on the SBN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2014-05-16 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6420:


Status: Patch Available  (was: Open)

> DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
> --
>
> Key: HDFS-6420
> URL: https://issues.apache.org/jira/browse/HDFS-6420
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6420.000.patch
>
>
> Currently in an HA setup (with a logical URI), the DFSAdmin#refreshNodes 
> command is by default sent only to the NameNode specified first in the 
> configuration. Users can use the "-fs" option to specify which NN to connect 
> to, but in that case they usually need to send two separate commands. We 
> should let refreshNodes be sent to both NameNodes by default.
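
For illustration, a rough sketch of the intended behavior (not the actual 
patch; resolveAllNameNodeProxies() is a hypothetical helper standing in for 
however DFSAdmin enumerates the NNs of the nameservice):

{code}
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

// Sketch only: issue refreshNodes against every NN behind the logical URI and
// keep going on failure so the other NN still gets refreshed.
class RefreshNodesSketch {
  int refreshNodes(Configuration conf) {
    int exitCode = 0;
    for (ClientProtocol nn : resolveAllNameNodeProxies(conf)) {
      try {
        nn.refreshNodes();
      } catch (Exception e) {
        exitCode = -1;
      }
    }
    return exitCode;
  }

  private List<ClientProtocol> resolveAllNameNodeProxies(Configuration conf) {
    throw new UnsupportedOperationException("hypothetical helper; see the patch");
  }
}
{code}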



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6409) Fix typo in log message about NameNode layout version upgrade.

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000639#comment-14000639
 ] 

Hadoop QA commented on HDFS-6409:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645319/HDFS-6409.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6922//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6922//console

This message is automatically generated.

> Fix typo in log message about NameNode layout version upgrade.
> --
>
> Key: HDFS-6409
> URL: https://issues.apache.org/jira/browse/HDFS-6409
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chen He
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-6409.patch
>
>
> During startup, the NameNode logs a message if the existing metadata is using 
> an old layout version.  This message contains a minor typo.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000635#comment-14000635
 ] 

Hadoop QA commented on HDFS-4167:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645278/HDFS-4167.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestHarFileSystem
  org.apache.hadoop.fs.TestFilterFileSystem
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6921//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6921//console

This message is automatically generated.

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch, HDFS-4167.004.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6423) Diskspace quota usage is wrongly updated when appending data from partial block

2014-05-16 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6423:


Attachment: HDFS-6423.000.patch

Uploaded a patch to fix the issue. The patch checks the original last block 
length before updating the quota usage. It also adds two unit tests.
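
For illustration, a rough sketch of the corrected adjustment (not the actual 
patch; lastBlockPreviouslyCompleted and originalLastBlockNumBytes are 
hypothetical names for the state the patch consults):

{code}
// Sketch only: base the adjustment on what the last block was actually
// charged with before this operation, instead of always assuming it was
// charged at the preferred block size.
final long previouslyCharged = lastBlockPreviouslyCompleted
    ? originalLastBlockNumBytes                  // partial block: charged at its real length
    : fileINode.getPreferredBlockSize();         // fresh UC block: charged at full size
final long diff = previouslyCharged - commitBlock.getNumBytes();
if (diff != 0) {
  String path = fileINode.getFullPathName();
  dir.updateSpaceConsumed(path, 0, -diff * fileINode.getFileReplication());
}
{code}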

> Diskspace quota usage is wrongly updated when appending data from partial 
> block
> ---
>
> Key: HDFS-6423
> URL: https://issues.apache.org/jira/browse/HDFS-6423
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6423.000.patch
>
>
> When appending new data to a file whose last block is a partial block, the 
> diskspace quota usage is not correctly updated. For example, suppose the 
> block size is 1024 bytes, and a file has size 1536 bytes (1.5 blocks). If we 
> then append another 1024 bytes to the file, the diskspace usage for this 
> file will not be updated to (2560 * replication) as expected, but (2048 * 
> replication).
> The cause of the issue is that in FSNamesystem#commitOrCompleteLastBlock, we 
> have 
> {code}
> // Adjust disk space consumption if required
> final long diff = fileINode.getPreferredBlockSize() - 
> commitBlock.getNumBytes();
> if (diff > 0) {
>   try {
> String path = fileINode.getFullPathName();
> dir.updateSpaceConsumed(path, 0, 
> -diff*fileINode.getFileReplication());
>   } catch (IOException e) {
> LOG.warn("Unexpected exception while updating disk space.", e);
>   }
> }
> {code}
> This code assumes that the last block of the file has never been completed 
> before, thus is always counted with the preferred block size in quota 
> computation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-2006) ability to support storing extended attributes per file

2014-05-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-2006:
-

Hadoop Flags: Incompatible change

> ability to support storing extended attributes per file
> ---
>
> Key: HDFS-2006
> URL: https://issues.apache.org/jira/browse/HDFS-2006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: dhruba borthakur
>Assignee: Yi Liu
> Attachments: ExtendedAttributes.html, HDFS-2006-Merge-1.patch, 
> HDFS-XAttrs-Design-1.pdf, HDFS-XAttrs-Design-2.pdf, HDFS-XAttrs-Design-3.pdf, 
> Test-Plan-for-Extended-Attributes-1.pdf, xattrs.1.patch, xattrs.patch
>
>
> It would be nice if HDFS provides a feature to store extended attributes for 
> files, similar to the one described here: 
> http://en.wikipedia.org/wiki/Extended_file_attributes. 
> The challenge is that it has to be done in such a way that a site not using 
> this feature does not waste precious memory resources in the namenode.
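
For readers following the branch, a rough sketch of the client-side usage the 
design documents describe (signatures are those proposed on the HDFS-2006 
branch and may change before the merge):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: set and read back a user-namespace extended attribute through
// the proposed FileSystem-level xattr API.
public class XAttrUsageSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/user/alice/data.bin"); // example path
    fs.setXAttr(p, "user.checksum", "sha256:example".getBytes("UTF-8"));
    byte[] value = fs.getXAttr(p, "user.checksum");
    System.out.println(new String(value, "UTF-8"));
  }
}
{code}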



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6424) blockReport doesn't need to invalidate blocks on SBN

2014-05-16 Thread Ming Ma (JIRA)
Ming Ma created HDFS-6424:
-

 Summary: blockReport doesn't need to invalidate blocks on SBN
 Key: HDFS-6424
 URL: https://issues.apache.org/jira/browse/HDFS-6424
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma


After the fix in https://issues.apache.org/jira/browse/HDFS-6178, blockManager 
no longer computes pending replication work on the SBN. As part of that, it 
also stopped removing items from invalidateBlocks.

Blocks can still be added to invalidateBlocks on the SBN as part of block 
report processing. As a result, the PendingDeletionBlocks metric keeps going 
up on the SBN.

To fix that, we should not add blocks to invalidateBlocks during block reports 
on the SBN.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6408) Redundant definitions in log4j.properties

2014-05-16 Thread Abhiraj Butala (JIRA)
Abhiraj Butala created HDFS-6408:


 Summary: Redundant definitions in log4j.properties
 Key: HDFS-6408
 URL: https://issues.apache.org/jira/browse/HDFS-6408
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Abhiraj Butala
Priority: Minor


The following properties in 
'hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties' are 
defined twice; the duplicate definitions should be removed:

{code}
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
[%t:%C{1}@%L] - %m%n
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2014-05-16 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000277#comment-14000277
 ] 

Jing Zhao commented on HDFS-6420:
-

I also did some system tests of the datanode decommission/recommission 
process. The patch works fine in my tests.

> DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
> --
>
> Key: HDFS-6420
> URL: https://issues.apache.org/jira/browse/HDFS-6420
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6420.000.patch
>
>
> Currently in an HA setup (with a logical URI), the DFSAdmin#refreshNodes 
> command is by default sent only to the NameNode specified first in the 
> configuration. Users can use the "-fs" option to specify which NN to connect 
> to, but in that case they usually need to send two separate commands. We 
> should let refreshNodes be sent to both NameNodes by default.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HDFS-6397:


Attachment: HDFS-6397.3.patch

Thanks again [~kihwal] for your quick response.

Uploaded the patch to address the test case failure.


> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch, HDFS-6397.3.patch
>
>
> Context: 
> When the NN is started without any live datanodes but with nodes listed in 
> dfs.includes, the NN shows the dead node count as '0'.
> There are two inconsistencies:
> 1. If you click on the dead node link (which shows the count is 0), it will 
> display the list of dead nodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of JMX response will explain it further:
> Look at the value of "NumDeadDataNodes" 
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at " "DeadNodes"".
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}
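
For anyone reproducing this, a rough sketch of pulling both beans from the 
NN's /jmx endpoint so NumDeadDataNodes can be compared with the entries under 
"DeadNodes" (the NN host and port below are placeholders):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch only: dump the two JMX beans quoted in the description side by side.
public class DeadNodeJmxCheck {
  public static void main(String[] args) throws Exception {
    String nn = "http://namenode.example.com:50070";
    dump(nn + "/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState");
    dump(nn + "/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo");
  }

  private static void dump(String url) throws Exception {
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(new URL(url).openStream(), "UTF-8"))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}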



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6406) Add capability for NFS gateway to reject connections from unprivileged ports

2014-05-16 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000568#comment-14000568
 ] 

Aaron T. Myers commented on HDFS-6406:
--

Thanks, Brandon. Sorry about that. If you do find anything, please CC me on the 
new JIRA you file.

> Add capability for NFS gateway to reject connections from unprivileged ports
> 
>
> Key: HDFS-6406
> URL: https://issues.apache.org/jira/browse/HDFS-6406
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 2.5.0
>
> Attachments: HDFS-6406.patch, HDFS-6406.patch
>
>
> Many NFS servers have the ability to only accept client connections 
> originating from privileged ports. It would be nice if the HDFS NFS gateway 
> had the same feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2014-05-16 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998656#comment-13998656
 ] 

Tsz Wo Nicholas Sze commented on HDFS-4167:
---

Sure, let's make it a metadata-only operation and handle truncate later.

> Add support for restoring/rolling back to a snapshot
> 
>
> Key: HDFS-4167
> URL: https://issues.apache.org/jira/browse/HDFS-4167
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Suresh Srinivas
>Assignee: Jing Zhao
> Attachments: HDFS-4167.000.patch, HDFS-4167.001.patch, 
> HDFS-4167.002.patch, HDFS-4167.003.patch
>
>
> This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6421) RHEL4 fails to compile vecsum.c

2014-05-16 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-6421:


Status: Patch Available  (was: Open)

> RHEL4 fails to compile vecsum.c
> ---
>
> Key: HDFS-6421
> URL: https://issues.apache.org/jira/browse/HDFS-6421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.5.0
> Environment: RHEL4
>Reporter: Jason Lowe
>Assignee: Mit Desai
> Attachments: HDFS-6421.patch
>
>
> After HDFS-6287 RHEL4 builds fail trying to compile vecsum.c since they don't 
> have RUSAGE_THREAD.  RHEL4 is ancient, but we use it in a 32-bit 
> compatibility environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2006) ability to support storing extended attributes per file

2014-05-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000462#comment-14000462
 ] 

Chris Nauroth commented on HDFS-2006:
-

Hi, [~kihwal].  I see you flagged this as an incompatible change.  Is that 
true?  I thought it was backwards-compatible.  Did you find a problem?

> ability to support storing extended attributes per file
> ---
>
> Key: HDFS-2006
> URL: https://issues.apache.org/jira/browse/HDFS-2006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: dhruba borthakur
>Assignee: Yi Liu
> Attachments: ExtendedAttributes.html, HDFS-2006-Merge-1.patch, 
> HDFS-2006-Merge-2.patch, HDFS-XAttrs-Design-1.pdf, HDFS-XAttrs-Design-2.pdf, 
> HDFS-XAttrs-Design-3.pdf, Test-Plan-for-Extended-Attributes-1.pdf, 
> xattrs.1.patch, xattrs.patch
>
>
> It would be nice if HDFS provides a feature to store extended attributes for 
> files, similar to the one described here: 
> http://en.wikipedia.org/wiki/Extended_file_attributes. 
> The challenge is that it has to be done in such a way that a site not using 
> this feature does not waste precious memory resources in the namenode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6421) RHEL4 fails to compile vecsum.c

2014-05-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000507#comment-14000507
 ] 

Colin Patrick McCabe commented on HDFS-6421:


Correction: Hadoop existed in the latter part of 2005, according to Wikipedia.  
But the first release of RHEL4 is still older :)

> RHEL4 fails to compile vecsum.c
> ---
>
> Key: HDFS-6421
> URL: https://issues.apache.org/jira/browse/HDFS-6421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.5.0
> Environment: RHEL4
>Reporter: Jason Lowe
>Assignee: Mit Desai
> Attachments: HDFS-6421.patch
>
>
> After HDFS-6287 RHEL4 builds fail trying to compile vecsum.c since they don't 
> have RUSAGE_THREAD.  RHEL4 is ancient, but we use it in a 32-bit 
> compatibility environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6420) DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000550#comment-14000550
 ] 

Hadoop QA commented on HDFS-6420:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645290/HDFS-6420.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSClientExcludedNodes

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6918//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6918//console

This message is automatically generated.

> DFSAdmin#refreshNodes should be sent to both NameNodes in HA setup
> --
>
> Key: HDFS-6420
> URL: https://issues.apache.org/jira/browse/HDFS-6420
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6420.000.patch
>
>
> Currently in an HA setup (with a logical URI), the DFSAdmin#refreshNodes 
> command is by default sent only to the NameNode specified first in the 
> configuration. Users can use the "-fs" option to specify which NN to connect 
> to, but in that case they usually need to send two separate commands. We 
> should let refreshNodes be sent to both NameNodes by default.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6419) TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk

2014-05-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6419:
---

Assignee: Akira AJISAKA

> TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk
> -
>
> Key: HDFS-6419
> URL: https://issues.apache.org/jira/browse/HDFS-6419
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: 
> org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints.txt
>
>
> TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk.
> See https://builds.apache.org/job/PreCommit-HDFS-Build/6908//testReport/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6375) Listing extended attributes with the search permission

2014-05-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000497#comment-14000497
 ] 

Chris Nauroth commented on HDFS-6375:
-

I agree that handling this as a separate API would be less error-prone, and it 
seems it would be more consistent with existing implementations too.

bq. If you only have scan access then the method would not even return the name.

I didn't quite follow this statement, because I thought the point of scan 
access was to allow the user to see the full list of xattr names even if that 
user isn't authorized to read the values.  Could you please clarify?
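
To make the "separate API" point concrete, a rough sketch of what a name-only 
listing call could look like next to the existing value-returning call (the 
listing method is hypothetical here; the exact name and semantics are what 
this JIRA would define):

{code}
// Hypothetical sketch for this discussion, not an existing API: a caller with
// only search (execute) access could enumerate names, while reading values
// would still go through getXAttrs and its permission checks.
List<String> names = fs.listXAttrs(path);       // names only, needs search access
Map<String, byte[]> all = fs.getXAttrs(path);   // names + values, needs read access
{code}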

> Listing extended attributes with the search permission
> --
>
> Key: HDFS-6375
> URL: https://issues.apache.org/jira/browse/HDFS-6375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Andrew Wang
>Assignee: Charles Lamb
> Attachments: HDFS-6375.1.patch, HDFS-6375.2.patch, HDFS-6375.3.patch, 
> HDFS-6375.4.patch
>
>
> From the attr(5) manpage:
> {noformat}
>Users with search access to a file or directory may retrieve a list  of
>attribute names defined for that file or directory.
> {noformat}
> This is like doing {{getfattr}} without the {{-d}} flag, which we currently 
> don't support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6361) TestIdUserGroup.testUserUpdateSetting failed due to out of range nfsnobody Id

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999077#comment-13999077
 ] 

Hadoop QA commented on HDFS-6361:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12644651/HDFS-6361.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6904//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6904//console

This message is automatically generated.

> TestIdUserGroup.testUserUpdateSetting failed due to out of range nfsnobody Id
> -
>
> Key: HDFS-6361
> URL: https://issues.apache.org/jira/browse/HDFS-6361
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.4.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.4.1
>
> Attachments: HDFS-6361.001.patch, HDFS-6361.002.patch, 
> HDFS-6361.003.patch
>
>
> The following error happens pretty often:
> org.apache.hadoop.nfs.nfs3.TestIdUserGroup.testUserUpdateSetting
> Failing for the past 1 build (Since Unstable#61 )
> Took 0.1 sec.
> add description
> Error Message
> For input string: "4294967294"
> Stacktrace
> java.lang.NumberFormatException: For input string: "4294967294"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.lang.Integer.parseInt(Integer.java:495)
>   at java.lang.Integer.valueOf(Integer.java:582)
>   at 
> org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMapInternal(IdUserGroup.java:137)
>   at 
> org.apache.hadoop.nfs.nfs3.IdUserGroup.updateMaps(IdUserGroup.java:188)
>   at org.apache.hadoop.nfs.nfs3.IdUserGroup.<init>(IdUserGroup.java:60)
>   at 
> org.apache.hadoop.nfs.nfs3.TestIdUserGroup.testUserUpdateSetting(TestIdUserGroup.java:71)
> Standard Output
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.nfs.nfs3.IdUserGroup).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
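
For context on the failure mode, a small sketch (illustration only, not the 
HDFS-6361 patch): nfsnobody on some systems is 4294967294, which overflows 
Integer.parseInt; parsing as a long and narrowing avoids the exception.

{code}
// Sketch only: shows why "4294967294" blows up and one way a parser can
// tolerate ids above Integer.MAX_VALUE.
public class UidParseSketch {
  public static void main(String[] args) {
    String nfsnobody = "4294967294";
    try {
      Integer.parseInt(nfsnobody);               // throws NumberFormatException
    } catch (NumberFormatException e) {
      System.out.println("int parse failed: " + e.getMessage());
    }
    int uid = (int) Long.parseLong(nfsnobody);   // narrows to -2 without throwing
    System.out.println("narrowed uid = " + uid);
  }
}
{code}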



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6409) Fix typo in log message about NameNode layout version upgrade.

2014-05-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999154#comment-13999154
 ] 

Chris Nauroth commented on HDFS-6409:
-

See below.  "...if a rolling upgraded is already started..." should be changed 
to "...if a rolling upgrade is already started...".

{quote}
Please restart NameNode with the "-rollingUpgrade started" option if a rolling 
upgraded is already started; or restart NameNode with the "-upgrade" option to 
start a new upgrade.
{quote}


> Fix typo in log message about NameNode layout version upgrade.
> --
>
> Key: HDFS-6409
> URL: https://issues.apache.org/jira/browse/HDFS-6409
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Priority: Trivial
>  Labels: newbie
>
> During startup, the NameNode logs a message if the existing metadata is using 
> an old layout version.  This message contains a minor typo.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6362) InvalidateBlocks is inconsistent in usage of DatanodeUuid and StorageID

2014-05-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999261#comment-13999261
 ] 

Arpit Agarwal commented on HDFS-6362:
-

The FindBugs warning looks invalid since the patch does not touch 
FsAclPermission.
bq. org.apache.hadoop.hdfs.protocol.FsAclPermission doesn't override 
org.apache.hadoop.fs.permission.FsPermission.equals(Object)

Also we don't need new tests as per my earlier comment. I will commit this 
shortly. 

> InvalidateBlocks is inconsistent in usage of DatanodeUuid and StorageID
> ---
>
> Key: HDFS-6362
> URL: https://issues.apache.org/jira/browse/HDFS-6362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6362.01.patch, HDFS-6362.02.patch, 
> HDFS-6362.03.patch, HDFS-6362.04.patch
>
>
> {{InvalidateBlocks}} must consistently use datanodeUuid as the key. e.g. add 
> and remove functions use datanode UUID and storage ID.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6056) Clean up NFS config settings

2014-05-16 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6056:
-

Attachment: HDFS-6056.006.patch

Rebased the patch. Please review.

> Clean up NFS config settings
> 
>
> Key: HDFS-6056
> URL: https://issues.apache.org/jira/browse/HDFS-6056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Aaron T. Myers
>Assignee: Brandon Li
> Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch, 
> HDFS-6056.003.patch, HDFS-6056.004.patch, HDFS-6056.005.patch, 
> HDFS-6056.006.patch
>
>
> As discussed on HDFS-6050, there's a few opportunities to improve the config 
> settings related to NFS. This JIRA is to implement those changes, which 
> include: moving hdfs-nfs related properties into hadoop-hdfs-nfs project, and 
> replacing 'nfs3' with 'nfs' in the property names.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-05-16 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-6230:


Status: Patch Available  (was: Open)

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Mit Desai
> Attachments: HDFS-6230-NoUpgradesInProgress.png, 
> HDFS-6230-UpgradeInProgress.jpg, HDFS-6230.patch, HDFS-6230.patch
>
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000517#comment-14000517
 ] 

Andrew Wang commented on HDFS-6422:
---

Hey Charles, thanks for working on this. I had a few review comments:

* I see you did some cleanup in XAttrCommands. I think we need to move the 
{{out.println(header)}} in the first if statement up by one line. We should 
still print the header as long as the file exists, even if it doesn't have any 
xattrs.
* There are a couple of issues with the exception logic in FSNamesystem. One 
is if the user asks for an xattr in an unknown or disallowed namespace. 
Another is if the user asks for multiple xattrs and one of them is not 
present. The filtering makes sense for the {{getAll}} case, but for the other 
case, we need to throw if a specifically requested xattr is not available or 
present (see the sketch after this list).
* We could actually do the bulk of this testing in the Java API, since the 
shell sets the error code on an exception. The Java API is better since it's 
faster, more concise, and lets us more easily verify expected exceptions and 
return values. We could have a short sanity test with the shell, but it can't 
handle things like requesting multiple xattrs right now.
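
A rough sketch of that distinction (illustration only, not the actual 
FSNamesystem code; the helper methods are hypothetical):

{code}
// Sketch only: when specific names were requested, a missing or filtered-out
// xattr should surface as an error instead of being silently dropped.
List<XAttr> getXAttrs(String src, List<XAttr> requested) throws IOException {
  List<XAttr> all = loadAndFilterByPermission(src);  // hypothetical helper
  if (requested == null || requested.isEmpty()) {
    return all;                                      // "get all": filtering is fine
  }
  List<XAttr> result = new ArrayList<>();
  for (XAttr want : requested) {
    XAttr found = findByName(all, want);             // hypothetical helper
    if (found == null) {
      throw new IOException(
          "At least one of the requested attributes was not found: " + want);
    }
    result.add(found);
  }
  return result;
}
{code}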

> getfattr in CLI doesn't throw exception or return non-0 return code when 
> xattr doesn't exist
> 
>
> Key: HDFS-6422
> URL: https://issues.apache.org/jira/browse/HDFS-6422
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Attachments: HDFS-6422.1.patch
>
>
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6421) RHEL4 fails to compile vecsum.c

2014-05-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000511#comment-14000511
 ] 

Colin Patrick McCabe commented on HDFS-6421:


Independently of the compatibility questions, let me review the patch.

* remove {{struct rusage}} if you are not going to use it
* please remove the include malloc.h line

+1 once those are addressed

> RHEL4 fails to compile vecsum.c
> ---
>
> Key: HDFS-6421
> URL: https://issues.apache.org/jira/browse/HDFS-6421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.5.0
> Environment: RHEL4
>Reporter: Jason Lowe
>Assignee: Mit Desai
> Attachments: HDFS-6421.patch
>
>
> After HDFS-6287 RHEL4 builds fail trying to compile vecsum.c since they don't 
> have RUSAGE_THREAD.  RHEL4 is ancient, but we use it in a 32-bit 
> compatibility environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999494#comment-13999494
 ] 

Hadoop QA commented on HDFS-6056:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12644904/HDFS-6056.005.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6912//console

This message is automatically generated.

> Clean up NFS config settings
> 
>
> Key: HDFS-6056
> URL: https://issues.apache.org/jira/browse/HDFS-6056
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Aaron T. Myers
>Assignee: Brandon Li
> Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch, 
> HDFS-6056.003.patch, HDFS-6056.004.patch, HDFS-6056.005.patch
>
>
> As discussed on HDFS-6050, there's a few opportunities to improve the config 
> settings related to NFS. This JIRA is to implement those changes, which 
> include: moving hdfs-nfs related properties into hadoop-hdfs-nfs project, and 
> replacing 'nfs3' with 'nfs' in the property names.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000559#comment-14000559
 ] 

Hadoop QA commented on HDFS-6397:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645164/HDFS-6397.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6919//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6919//console

This message is automatically generated.

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch
>
>
> Context: 
> When the NN is started without any live datanodes but with nodes listed in 
> dfs.includes, the NN shows the dead node count as '0'.
> There are two inconsistencies:
> 1. If you click on the dead node link (which shows the count is 0), it will 
> display the list of dead nodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of JMX response will explain it further:
> Look at the value of "NumDeadDataNodes" 
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at " "DeadNodes"".
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998517#comment-13998517
 ] 

Konstantin Shvachko commented on HDFS-6325:
---

This looks good. Just two final touches:
- In appendFileInternal() you can actually combine nested if statements into 
one with three conditions.
- In testAppendInsufficientLocations() you should use {{LOG.info("message", 
e);}} instead of {{+ e.getMessage()}}

The patch cleanly applies to branch-2 and branch-2.4. 
It would be nice to have it in 2.4.1, if there are no objections.

> Append should fail if the last block has insufficient number of replicas
> 
>
> Key: HDFS-6325
> URL: https://issues.apache.org/jira/browse/HDFS-6325
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Konstantin Shvachko
>Assignee: Keith Pak
> Attachments: HDFS-6325.patch, HDFS-6325.patch, HDFS-6325.patch, 
> HDFS-6325_test.patch, appendTest.patch
>
>
> Currently append() succeeds on a file whose last block has no replicas. But 
> the subsequent updatePipeline() fails as there are no replicas, with the 
> exception "Unable to retrieve blocks locations for last block". This leaves 
> the file unclosed, and others cannot do anything with it until its lease 
> expires.
> The solution is to check replicas of the last block on the NameNode and fail 
> during append() rather than during updatePipeline().
> How many replicas should be present before NN allows to append? I see two 
> options:
> # min-replication: allow append if the last block is minimally replicated (1 
> by default)
> # full-replication: allow append if the last block is fully replicated (3 by 
> default)
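
A rough sketch of the option-1 check (illustration only, not the actual patch; 
names follow the surrounding discussion):

{code}
// Sketch only: refuse the append up front when the last block has fewer live
// replicas than the configured minimum, instead of failing later in
// updatePipeline().
BlockInfo lastBlock = fileINode.getLastBlock();
if (lastBlock != null
    && blockManager.countNodes(lastBlock).liveReplicas() < minReplication) {
  throw new IOException("append: last block of " + src
      + " has insufficient replicas and cannot be appended to");
}
{code}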



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4913:
---

  Resolution: Fixed
   Fix Version/s: 2.5.0
Target Version/s: 2.5.0
  Status: Resolved  (was: Patch Available)

> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 2.5.0
>
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch, 
> HDFS-4913.004.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
> FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>  at 
> org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>  at 
> java.security.AccessController.doPrivileged(Native Method)
>  at 
> javax.security.auth.Subject.doAs(Subject.java:396)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>  at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Met

[jira] [Commented] (HDFS-6374) setXAttr should require the user to be the owner of the file or directory

2014-05-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000495#comment-14000495
 ] 

Andrew Wang commented on HDFS-6374:
---

+1 from me as well, though I would have preferred these tests to be against the 
java API for brevity. Will commit shortly.

> setXAttr should require the user to be the owner of the file or directory
> -
>
> Key: HDFS-6374
> URL: https://issues.apache.org/jira/browse/HDFS-6374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Andrew Wang
>Assignee: Charles Lamb
> Attachments: HDFS-6374.1.patch, HDFS-6374.2.patch, HDFS-6374.3.patch
>
>
> From the attr(5) manpage:
> {noformat}
>For  this reason, extended user attributes are only allowed for regular
>files and directories,  and  access  to  extended  user  attributes  is
>restricted  to the owner and to users with appropriate capabilities for
>directories with the sticky bit set (see the chmod(1) manual  page  for
>an explanation of Sticky Directories).
> {noformat}
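
A rough sketch of the ownership check implied by the manpage text above 
(illustration only; the argument names are made up for the example and the 
real enforcement lives in the NameNode's permission checking):

{code}
import org.apache.hadoop.security.AccessControlException;

// Sketch only: for xattrs in the "user." namespace, require the caller to be
// the file owner (or a superuser) before allowing setXAttr.
class XAttrOwnerCheckSketch {
  void checkXAttrChangeAccess(String callerUser, boolean callerIsSuperUser,
      String fileOwner, String xattrName, String src)
      throws AccessControlException {
    if (xattrName.startsWith("user.") && !callerIsSuperUser
        && !callerUser.equals(fileOwner)) {
      throw new AccessControlException("User " + callerUser
          + " is not the owner of " + src
          + " and cannot modify user-namespace xattrs");
    }
  }
}
{code}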



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999001#comment-13999001
 ] 

Hudson commented on HDFS-6287:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5605/])
HDFS-6287. Add vecsum test of libhdfs read access times (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594751)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test/vecsum.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test_libhdfs_threaded.c


> Add vecsum test of libhdfs read access times
> 
>
> Key: HDFS-6287
> URL: https://issues.apache.org/jira/browse/HDFS-6287
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: libhdfs, test
>Affects Versions: 2.5.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HDFS-6282.001.patch, HDFS-6287.002.patch, 
> HDFS-6287.003.patch, HDFS-6287.004.patch, HDFS-6287.005.patch, 
> HDFS-6287.006.patch
>
>
> Add vecsum, a benchmark that tests libhdfs access times.  This includes 
> short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
> local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2006) ability to support storing extended attributes per file

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000563#comment-14000563
 ] 

Hadoop QA commented on HDFS-2006:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12645209/HDFS-2006-Merge-2.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6923//console

This message is automatically generated.

> ability to support storing extended attributes per file
> ---
>
> Key: HDFS-2006
> URL: https://issues.apache.org/jira/browse/HDFS-2006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: dhruba borthakur
>Assignee: Yi Liu
> Attachments: ExtendedAttributes.html, HDFS-2006-Merge-1.patch, 
> HDFS-2006-Merge-2.patch, HDFS-XAttrs-Design-1.pdf, HDFS-XAttrs-Design-2.pdf, 
> HDFS-XAttrs-Design-3.pdf, Test-Plan-for-Extended-Attributes-1.pdf, 
> xattrs.1.patch, xattrs.patch
>
>
> It would be nice if HDFS provides a feature to store extended attributes for 
> files, similar to the one described here: 
> http://en.wikipedia.org/wiki/Extended_file_attributes. 
> The challenge is that it has to be done in such a way that a site not using 
> this feature does not waste precious memory resources in the namenode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6422:
--

Assignee: Charles Lamb

> getfattr in CLI doesn't throw exception or return non-0 return code when 
> xattr doesn't exist
> 
>
> Key: HDFS-6422
> URL: https://issues.apache.org/jira/browse/HDFS-6422
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Attachments: HDFS-6422.1.patch
>
>
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6423) Diskspace quota usage is wrongly updated when appending data from partial block

2014-05-16 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6423:


Status: Patch Available  (was: Open)

> Diskspace quota usage is wrongly updated when appending data from partial 
> block
> ---
>
> Key: HDFS-6423
> URL: https://issues.apache.org/jira/browse/HDFS-6423
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6423.000.patch
>
>
> When appending new data to a file whose last block is a partial block, the 
> diskspace quota usage is not correctly updated. For example, suppose the block 
> size is 1024 bytes, and a file has size 1536 bytes (1.5 blocks). If we then 
> append another 1024 bytes to the file, the diskspace usage for this file will 
> not be updated to (2560 * replication) as expected, but (2048 * replication).
> The cause of the issue is that in FSNamesystem#commitOrCompleteLastBlock, we 
> have 
> {code}
> // Adjust disk space consumption if required
> final long diff = fileINode.getPreferredBlockSize() - 
> commitBlock.getNumBytes();
> if (diff > 0) {
>   try {
> String path = fileINode.getFullPathName();
> dir.updateSpaceConsumed(path, 0, 
> -diff*fileINode.getFileReplication());
>   } catch (IOException e) {
> LOG.warn("Unexpected exception while updating disk space.", e);
>   }
> }
> {code}
> This code assumes that the last block of the file has never been completed 
> before, thus is always counted with the preferred block size in quota 
> computation.
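
For illustration, the arithmetic above can be written out as a small sketch (class and method names are hypothetical, not the attached patch): the adjustment should only charge the bytes actually added on top of what the partial block already consumed, rather than assuming the last block was never completed.

{code}
// Illustrative only; names and structure are assumptions, not the attached patch.
public class QuotaDeltaExample {

  /** Extra space to charge when the last block grows from oldBytes to newBytes. */
  static long spaceDelta(long oldBytes, long newBytes, short replication) {
    return (newBytes - oldBytes) * replication;
  }

  public static void main(String[] args) {
    final short replication = 3;
    // 1024-byte blocks, file of 1536 bytes: one full block plus a 512-byte partial block.
    // Appending 1024 bytes grows the partial block to 1024 bytes and adds a new 512-byte block.
    long delta = spaceDelta(512, 1024, replication)   // the partial block fills up
        + 512 * replication;                          // the new partial block
    // Prints 3072 (1024 * 3), which brings the usage to 2560 * replication as expected.
    System.out.println("diskspace usage grows by " + delta + " bytes");
  }
}
{code}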



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999026#comment-13999026
 ] 

Hudson commented on HDFS-6305:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5605/])
HDFS-6305. WebHdfs response decoding may throw RuntimeExceptions (Daryn Sharp 
via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594273)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java


> WebHdfs response decoding may throw RuntimeExceptions
> -
>
> Key: HDFS-6305
> URL: https://issues.apache.org/jira/browse/HDFS-6305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6305.patch
>
>
> WebHdfs does not guard against exceptions while decoding the response 
> payload.  The json parser will throw RunTime exceptions on malformed 
> responses.  The json decoding routines do not validate the expected fields 
> are present which may cause NPEs.
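
For illustration only, a minimal sketch of the kind of defensive decoding the description calls for (helper names are hypothetical, not the committed change): validate that expected fields are present and turn parser RuntimeExceptions into IOExceptions instead of letting them escape.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.Callable;

// Hypothetical sketch of defensive JSON field access; not the committed patch.
final class JsonDecodeExample {

  /** Returns the named field or throws an IOException instead of a NullPointerException. */
  static Object requireField(Map<?, ?> json, String name) throws IOException {
    if (json == null) {
      throw new IOException("Missing JSON response body");
    }
    Object value = json.get(name);
    if (value == null) {
      throw new IOException("Missing field '" + name + "' in response: " + json);
    }
    return value;
  }

  /** Wraps a decode step so RuntimeExceptions from malformed payloads surface as IOExceptions. */
  static <T> T decodeSafely(Callable<T> decode) throws IOException {
    try {
      return decode.call();
    } catch (IOException e) {
      throw e;
    } catch (Exception e) {   // parser RuntimeExceptions, ClassCastException, etc.
      throw new IOException("Failed to decode WebHDFS response", e);
    }
  }
}
{code}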



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6412) Interface audience and stability annotations missing from several new classes related to xattrs.

2014-05-16 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-6412:
---

 Summary: Interface audience and stability annotations missing from 
several new classes related to xattrs.
 Key: HDFS-6412
 URL: https://issues.apache.org/jira/browse/HDFS-6412
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Chris Nauroth
Priority: Minor
 Attachments: HDFS-6412.1.patch

Let's add the appropriate interface audience and stability annotations to the 
following new classes related to xattrs: {{XAttr}}, {{NNConf}} and 
{{XAttrHelper}}.
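
For reference, the annotations in question are applied like this (illustrative class, not the attached patch):

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Illustrative only: how audience and stability annotations are typically applied.
@InterfaceAudience.Private
@InterfaceStability.Evolving
public class SomeXAttrHelperClass {
  // class body unchanged
}
{code}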



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6374) setXAttr should require the user to be the owner of the file or directory

2014-05-16 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999627#comment-13999627
 ] 

Yi Liu commented on HDFS-6374:
--

Thanks Charles, +1, looks good to me.

> setXAttr should require the user to be the owner of the file or directory
> -
>
> Key: HDFS-6374
> URL: https://issues.apache.org/jira/browse/HDFS-6374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Andrew Wang
>Assignee: Charles Lamb
> Attachments: HDFS-6374.1.patch, HDFS-6374.2.patch, HDFS-6374.3.patch
>
>
> From the attr(5) manpage:
> {noformat}
>For  this reason, extended user attributes are only allowed for regular
>files and directories,  and  access  to  extended  user  attributes  is
>restricted  to the owner and to users with appropriate capabilities for
>directories with the sticky bit set (see the chmod(1) manual  page  for
>an explanation of Sticky Directories).
> {noformat}
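
A sketch of the ownership rule expressed as a standalone check (hypothetical helper; the real enforcement belongs in the NameNode's permission checking, and the patch may differ):

{code}
import org.apache.hadoop.security.AccessControlException;

// Hypothetical sketch: only the owner (or a superuser) may set a user-namespace xattr.
final class XAttrOwnerCheckExample {
  static void checkOwner(String caller, String owner, boolean callerIsSuperUser)
      throws AccessControlException {
    if (!callerIsSuperUser && !caller.equals(owner)) {
      throw new AccessControlException("User " + caller + " is not the owner ("
          + owner + ") and cannot set this xattr");
    }
  }
}
{code}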



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6409) Fix typo in log message about NameNode layout version upgrade.

2014-05-16 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HDFS-6409:
--

Attachment: HDFS-6409.patch

patch attached.

> Fix typo in log message about NameNode layout version upgrade.
> --
>
> Key: HDFS-6409
> URL: https://issues.apache.org/jira/browse/HDFS-6409
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Priority: Trivial
>  Labels: newbie
> Attachments: HDFS-6409.patch
>
>
> During startup, the NameNode logs a message if the existing metadata is using 
> an old layout version.  This message contains a minor typo.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6422:
--

Affects Version/s: HDFS XAttrs (HDFS-2006)

> getfattr in CLI doesn't throw exception or return non-0 return code when 
> xattr doesn't exist
> 
>
> Key: HDFS-6422
> URL: https://issues.apache.org/jira/browse/HDFS-6422
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Charles Lamb
>Assignee: Charles Lamb
> Attachments: HDFS-6422.1.patch
>
>
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6375) Listing extended attributes with the search permission

2014-05-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000117#comment-14000117
 ] 

Andrew Wang commented on HDFS-6375:
---

Hey Charles, could you rebase this patch for me? It doesn't apply after the 
recent series of commits that went in.

> Listing extended attributes with the search permission
> --
>
> Key: HDFS-6375
> URL: https://issues.apache.org/jira/browse/HDFS-6375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Andrew Wang
>Assignee: Charles Lamb
> Attachments: HDFS-6375.1.patch, HDFS-6375.2.patch, HDFS-6375.3.patch
>
>
> From the attr(5) manpage:
> {noformat}
>Users with search access to a file or directory may retrieve a list  of
>attribute names defined for that file or directory.
> {noformat}
> This is like doing {{getfattr}} without the {{-d}} flag, which we currently 
> don't support.
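
As a client-side sketch, listing the attribute names only (no values) would look roughly like the following, assuming a {{FileSystem#listXAttrs(Path)}} method shaped like the one being added on the HDFS-2006 branch (the exact signature is an assumption here):

{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only; assumes a listXAttrs(Path) method returning the attribute names.
public class ListXAttrNames {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    List<String> names = fs.listXAttrs(new Path(args[0]));
    for (String name : names) {
      System.out.println(name);   // names only, analogous to getfattr without -d
    }
  }
}
{code}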



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6406) Add capability for NFS gateway to reject connections from unprivileged ports

2014-05-16 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000314#comment-14000314
 ] 

Aaron T. Myers commented on HDFS-6406:
--

bq. SLF4J doesn't let you at the log4j appender behind a log, so switching to 
log4j would break your test. The workaround there is to use commons logging in 
the test and request the same log -which is what most of today's tests do. If 
you stick with commons logging, it's a non-issue.

Note that the test does not actually rely on the increased log level - that was 
just for convenience so that I could look at the logs after running it.

bq. this would be good time to replace the inline "nfs3.mountd.port", with a 
constant.

Agreed, but I've been deliberately avoiding messing with the NFS-related 
configs because HDFS-6056 is doing a larger cleanup of that whole thing.

Sounds like there are no objections to this patch, despite the inherent limits 
of the approach. I'm going to go ahead and commit this shortly based on 
Andrew's +1.

> Add capability for NFS gateway to reject connections from unprivileged ports
> 
>
> Key: HDFS-6406
> URL: https://issues.apache.org/jira/browse/HDFS-6406
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-6406.patch, HDFS-6406.patch
>
>
> Many NFS servers have the ability to only accept client connections 
> originating from privileged ports. It would be nice if the HDFS NFS gateway 
> had the same feature.
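
A minimal sketch of the check implied here (hypothetical helper, not the attached patch): reject peers whose TCP source port is not privileged, i.e. above 1023.

{code}
import java.net.InetSocketAddress;

// Hypothetical sketch of a privileged-port check; the real patch wires this into the gateway's transport layer.
final class PrivilegedPortCheck {
  private static final int MAX_PRIVILEGED_PORT = 1023;

  /** Returns true if the remote peer connected from a privileged (root-only) source port. */
  static boolean fromPrivilegedPort(InetSocketAddress remote) {
    return remote != null && remote.getPort() <= MAX_PRIVILEGED_PORT;
  }
}
{code}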



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000406#comment-14000406
 ] 

Colin Patrick McCabe commented on HDFS-4913:


The FindBugs warning is clearly bogus since this patch doesn't change any Java 
code (and FindBugs only operates on Java).  Similarly with the 
{{TestBPOfferService}} test failure.

Thanks for the reviews-- committing...

> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch, 
> HDFS-4913.004.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
> FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>  at 
> org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>  at 
> java.security.AccessController.doPrivileged(Native Method)
>  at 
> javax.security.auth.Subject.doAs(Subject.java:396)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>  at 
> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
> 

[jira] [Updated] (HDFS-6362) InvalidateBlocks is inconsistent in usage of DatanodeUuid and StorageID

2014-05-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6362:


   Resolution: Fixed
Fix Version/s: 2.4.1
   3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2 and branch-2.4.

The merge to branch-2 was unexpectedly complex due to code divergence between 
trunk and branch-2. I also merged down HDFS-4052 to reduce the divergence 
between the branches.

Thanks for the review [~cnauroth].

> InvalidateBlocks is inconsistent in usage of DatanodeUuid and StorageID
> ---
>
> Key: HDFS-6362
> URL: https://issues.apache.org/jira/browse/HDFS-6362
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6362.01.patch, HDFS-6362.02.patch, 
> HDFS-6362.03.patch, HDFS-6362.04.patch
>
>
> {{InvalidateBlocks}} must consistently use datanodeUuid as the key; e.g. the add 
> and remove functions currently mix datanode UUID and storage ID.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6355) Fix divide-by-zero, improper use of wall-clock time in BlockPoolSliceScanner

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999006#comment-13999006
 ] 

Hudson commented on HDFS-6355:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5605/])
HDFS-6355. Fix divide-by-zero, improper use of wall-clock time in 
BlockPoolSliceScanner (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594338)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeBlockScanner.java


> Fix divide-by-zero, improper use of wall-clock time in BlockPoolSliceScanner
> 
>
> Key: HDFS-6355
> URL: https://issues.apache.org/jira/browse/HDFS-6355
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.5.0
>
> Attachments: HDFS-6355.001.patch
>
>
> BlockPoolSliceScanner uses {{Time.now}} to calculate an interval.  But this 
> is incorrect, since if the wall-clock time changes, we will end up setting 
> the scan periods to a shorter or longer time than we configured.
> There is also a case where we may divide by zero if we get unlucky, because 
> we calculate an interval and divide by it, without checking whether the 
> interval is 0 milliseconds.  This would produce an {{ArithmeticException}} 
> since we are using longs.
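
A sketch of the two fixes the description implies (illustrative, not the committed patch): measure intervals with a monotonic clock such as {{System.nanoTime()}} and guard the divisor against zero.

{code}
// Illustrative only: interval measurement that survives wall-clock changes and avoids a zero divisor.
final class ScanRateExample {
  static long bytesPerSecond(long bytesScanned, long startNanos, long endNanos) {
    long elapsedMs = (endNanos - startNanos) / 1000000L;   // from System.nanoTime(), monotonic
    if (elapsedMs <= 0) {
      elapsedMs = 1;   // avoids ArithmeticException on the long division below
    }
    return bytesScanned * 1000L / elapsedMs;
  }
}
{code}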



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6422:
---

Attachment: HDFS-6422.1.patch

> getfattr in CLI doesn't throw exception or return non-0 return code when 
> xattr doesn't exist
> 
>
> Key: HDFS-6422
> URL: https://issues.apache.org/jira/browse/HDFS-6422
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Charles Lamb
> Attachments: HDFS-6422.1.patch
>
>
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6411) nfs-hdfs-gateway mount raises I/O error and hangs when an unauthorized user attempts to access it

2014-05-16 Thread Zhongyi Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhongyi Xie updated HDFS-6411:
--

Description: 
We use the nfs-hdfs gateway to expose hdfs thru nfs.

0) login as root, run nfs-hdfs gateway as a user, say, nfsserver. 
[root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
backups  hive  mr-history  system  tmp  user
1) add a user nfs-test: adduser nfs-test (make sure that this user is not a 
proxyuser of nfsserver)
2) switch to test user: su - nfs-test
3) access hdfs nfs gateway
[nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
ls: cannot open directory /hdfs: Input/output error
retry:
[nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
ls: cannot access /hdfs: Stale NFS file handle
4) switch back to root and access hdfs nfs gateway
[nfs-test@zhongyi-test-cluster-desktop ~]$ exit
logout
[root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
ls: cannot access /hdfs: Stale NFS file handle


the nfsserver log indicates we hit an authorization error in the rpc handler: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 User: nfsserver is not allowed to impersonate nfs-test
and NFS3ERR_IO is returned, which explains why we see input/output error. 
One can catch the AuthorizationException and return the correct error 
(NFS3ERR_ACCES) to fix the error message on the client side, but that doesn't 
seem to solve the mount hang issue. When the hang happens, the nfsserver stops 
logging, which makes it more difficult to figure out the real cause of the 
hang. According to jstack and the debugger, the nfsserver seems to be waiting 
for client requests.

  was:
We the nfs-hdfs gateway to expose hdfs thru nfs.

0) login as root, run nfs-hdfs gateway as a user, say, nfsserver. 
[root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
backups  hive  mr-history  system  tmp  user
1) add a user nfs-test: adduser nfs-test(make sure that this user is not a 
proxyuser of nfsserver
2) switch to test user: su - nfs-test
3) access hdfs nfs gateway
[nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
ls: cannot open directory /hdfs: Input/output error
retry:
[nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
ls: cannot access /hdfs: Stale NFS file handle
4) switch back to root and access hdfs nfs gateway
[nfs-test@zhongyi-test-cluster-desktop ~]$ exit
logout
[root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
ls: cannot access /hdfs: Stale NFS file handle


the nfsserver log indicates we hit an authorization error in the rpc handler; 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 User: nfsserver is not allowed to impersonate nfs-test
and NFS3ERR_IO is returned, which explains why we see input/output error. 
One can catch the authorizationexception and return the correct error: 
NFS3ERR_ACCES to fix the error message on the client side but that doesn't seem 
to solve the mount hang issue though. When the mount hang happens, it stops 
printing nfsserver log which makes it more difficult to figure out the real 
cause of the hang. According to jstack and debugger, the nfsserver seems to be 
waiting for client requests


> nfs-hdfs-gateway mount raises I/O error and hangs when an unauthorized user 
> attempts to access it
> 
>
> Key: HDFS-6411
> URL: https://issues.apache.org/jira/browse/HDFS-6411
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Zhongyi Xie
>
> We use the nfs-hdfs gateway to expose hdfs thru nfs.
> 0) login as root, run nfs-hdfs gateway as a user, say, nfsserver. 
> [root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
> backups  hive  mr-history  system  tmp  user
> 1) add a user nfs-test: adduser nfs-test (make sure that this user is not a 
> proxyuser of nfsserver)
> 2) switch to test user: su - nfs-test
> 3) access hdfs nfs gateway
> [nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
> ls: cannot open directory /hdfs: Input/output error
> retry:
> [nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
> ls: cannot access /hdfs: Stale NFS file handle
> 4) switch back to root and access hdfs nfs gateway
> [nfs-test@zhongyi-test-cluster-desktop ~]$ exit
> logout
> [root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
> ls: cannot access /hdfs: Stale NFS file handle
> the nfsserver log indicates we hit an authorization error in the rpc handler; 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: nfsserver is not allowed to impersonate nfs-test
> and NFS3ERR_IO is returned, which explains why we see input/output error. 
> One can catch the authorizationexception and return the correct error: 
> NFS3ERR_ACCES to fix the error message on the client si

[jira] [Updated] (HDFS-6134) Transparent data at rest encryption

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6134:
--

Target Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)

> Transparent data at rest encryption
> ---
>
> Key: HDFS-6134
> URL: https://issues.apache.org/jira/browse/HDFS-6134
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.3.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HDFSDataAtRestEncryption.pdf
>
>
> Because of privacy and security regulations, for many industries, sensitive 
> data at rest must be in encrypted form. For example: the health­care industry 
> (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
> US government (FISMA regulations).
> This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
> be used transparently by any application accessing HDFS via Hadoop Filesystem 
> Java API, Hadoop libhdfs C library, or WebHDFS REST API.
> The resulting implementation should be able to be used in compliance with 
> different regulation requirements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6412) Interface audience and stability annotations missing from several new classes related to xattrs.

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-6412:
-

Assignee: Andrew Wang  (was: Yi Liu)

> Interface audience and stability annotations missing from several new classes 
> related to xattrs.
> 
>
> Key: HDFS-6412
> URL: https://issues.apache.org/jira/browse/HDFS-6412
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6412.1.patch
>
>
> Let's add the appropriate interface audience and stability annotations to the 
> following new classes related to xattrs: {{XAttr}}, {{NNConf}} and 
> {{XAttrHelper}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-16 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999637#comment-13999637
 ] 

Konstantin Shvachko commented on HDFS-6325:
---

+1 looks good.

> Append should fail if the last block has insufficient number of replicas
> 
>
> Key: HDFS-6325
> URL: https://issues.apache.org/jira/browse/HDFS-6325
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: Konstantin Shvachko
>Assignee: Keith Pak
> Attachments: HDFS-6325.patch, HDFS-6325.patch, HDFS-6325.patch, 
> HDFS-6325.patch, HDFS-6325_test.patch, appendTest.patch
>
>
> Currently append() succeeds on a file whose last block has no 
> replicas. But the subsequent updatePipeline() fails as there are no replicas 
> with the exception "Unable to retrieve blocks locations for last block". This 
> leaves the file unclosed, and others can not do anything with it until its 
> lease expires.
> The solution is to check replicas of the last block on the NameNode and fail 
> during append() rather than during updatePipeline().
> How many replicas should be present before NN allows to append? I see two 
> options:
> # min-replication: allow append if the last block is minimally replicated (1 
> by default)
> # full-replication: allow append if the last block is fully replicated (3 by 
> default)
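
A sketch of the NameNode-side guard described above, with the replica threshold left as a parameter so either policy (min-replication or full-replication) can be expressed (hypothetical method, not the attached patch):

{code}
import java.io.IOException;

// Hypothetical sketch of the append-time check; the attached patch implements it inside the NameNode.
final class AppendCheckExample {
  /**
   * Fails append up front if the last block has fewer live replicas than required,
   * instead of letting the later updatePipeline() call fail.
   */
  static void checkLastBlockReplicas(int liveReplicas, int requiredReplicas, String src)
      throws IOException {
    if (liveReplicas < requiredReplicas) {
      throw new IOException("Cannot append to " + src + ": last block has only "
          + liveReplicas + " replicas, " + requiredReplicas + " required");
    }
  }
}
{code}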



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-5621) NameNode: add indicator in web UI file system browser if a file has an ACL.

2014-05-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5621.
-

Resolution: Duplicate

I'm resolving this as a duplicate of HDFS-6326.  I ended up implementing this 
within the scope of that patch, so the indicator will be there in the 2.4.1 
release.

> NameNode: add indicator in web UI file system browser if a file has an ACL.
> ---
>
> Key: HDFS-5621
> URL: https://issues.apache.org/jira/browse/HDFS-5621
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>Assignee: Haohui Mai
> Attachments: HDFS-5621.000.patch
>
>
> Change the file system browser to append the '+' character to permissions of 
> any file or directory that has an ACL.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6421) RHEL4 fails to compile vecsum.c

2014-05-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000506#comment-14000506
 ] 

Colin Patrick McCabe commented on HDFS-6421:


I've actually never heard of anyone deploying Hadoop on RHEL4 (when it was 
released in 2005, Hadoop didn't exist).  However, I did manage to track down a 
RHEL4 virtual machine and try to compile.  I noticed that a bunch of stuff 
didn't work, not only {{vecsum}}.  For example, you get this error when 
building YARN:

{code}
 [exec] libcontainer.a(container-executor.c.o)(.text+0x593): In function 
`mkdirs':
 [exec] 
/home/cmccabe/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:378:
 undefined reference to `mkdirat'
 [exec] 
libcontainer.a(container-executor.c.o)(.text+0x5b4):/home/cmccabe/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c:387:
 undefined reference to `openat'
{code}

It's going to be tough to "fix" this since RHEL4 just doesn't have big pieces 
of necessary functionality, like cgroups.

Maybe this is a dumb question on my part, but I wonder if compiling without 
{{-Pnative}} is a possibility for you?  We dropped support for JDK5, which was 
released just a few months after RHEL4.  Is it realistic to compile trunk on a 
10-year-old OS?  What do you guys think?

I definitely think we should fix the malloc.h thing, though, to compile on 
modern BSD systems.

> RHEL4 fails to compile vecsum.c
> ---
>
> Key: HDFS-6421
> URL: https://issues.apache.org/jira/browse/HDFS-6421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.5.0
> Environment: RHEL4
>Reporter: Jason Lowe
>Assignee: Mit Desai
> Attachments: HDFS-6421.patch
>
>
> After HDFS-6287 RHEL4 builds fail trying to compile vecsum.c since they don't 
> have RUSAGE_THREAD.  RHEL4 is ancient, but we use it in a 32-bit 
> compatibility environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6419) TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000459#comment-14000459
 ] 

Hadoop QA commented on HDFS-6419:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645283/HDFS-6419.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6920//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6920//console

This message is automatically generated.

> TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk
> -
>
> Key: HDFS-6419
> URL: https://issues.apache.org/jira/browse/HDFS-6419
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HDFS-6419.patch, 
> org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints.txt
>
>
> TestBookKeeperHACheckpoints#TestSBNCheckpoints fails on trunk.
> See https://builds.apache.org/job/PreCommit-HDFS-Build/6908//testReport/



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6374) setXAttr should require the user to be the owner of the file or directory

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-6374.
---

   Resolution: Fixed
Fix Version/s: HDFS XAttrs (HDFS-2006)

Committed to branch.

> setXAttr should require the user to be the owner of the file or directory
> -
>
> Key: HDFS-6374
> URL: https://issues.apache.org/jira/browse/HDFS-6374
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Andrew Wang
>Assignee: Charles Lamb
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6374.1.patch, HDFS-6374.2.patch, HDFS-6374.3.patch
>
>
> From the attr(5) manpage:
> {noformat}
>For  this reason, extended user attributes are only allowed for regular
>files and directories,  and  access  to  extended  user  attributes  is
>restricted  to the owner and to users with appropriate capabilities for
>directories with the sticky bit set (see the chmod(1) manual  page  for
>an explanation of Sticky Directories).
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6314) Test cases for XAttrs

2014-05-16 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6314:
-

Attachment: HDFS-6314.4.patch

Thanks Uma for the review. I have updated them in the new patch.

> Test cases for XAttrs
> -
>
> Key: HDFS-6314
> URL: https://issues.apache.org/jira/browse/HDFS-6314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Yi Liu
>Assignee: Yi Liu
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6314.1.patch, HDFS-6314.2.patch, HDFS-6314.3.patch, 
> HDFS-6314.4.patch, HDFS-6314.patch
>
>
> Tests NameNode interaction for all XAttr APIs, covers restarting NN, saving 
> new checkpoint.
> Tests XAttr for Snapshot, symlinks.
> Tests XAttr for HA failover.
> And more...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6422:
---

Attachment: HDFS-6422.1.patch

I've attached a patch that addresses this issue.


> getfattr in CLI doesn't throw exception or return non-0 return code when 
> xattr doesn't exist
> 
>
> Key: HDFS-6422
> URL: https://issues.apache.org/jira/browse/HDFS-6422
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Charles Lamb
> Attachments: HDFS-6422.1.patch
>
>
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-6422:
--

 Summary: getfattr in CLI doesn't throw exception or return non-0 
return code when xattr doesn't exist
 Key: HDFS-6422
 URL: https://issues.apache.org/jira/browse/HDFS-6422
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Charles Lamb


If you do

hdfs dfs -getfattr -n user.blah /foo

and user.blah doesn't exist, the command prints

# file: /foo

and a 0 return code.

It should print an exception and return a non-0 return code instead.
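
For reference, a client-side sketch of the behaviour being asked for (a hypothetical tool, not the attached patch; it assumes the FileSystem#getXAttr(Path, String) API from the HDFS-2006 branch and that a missing xattr surfaces as an IOException or a null value):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: fetch one named xattr and exit non-zero when it does not exist.
public class GetXAttrOrFail {
  public static void main(String[] args) throws IOException {
    String name = args[0];             // e.g. "user.blah"
    Path path = new Path(args[1]);     // e.g. "/foo"
    FileSystem fs = FileSystem.get(new Configuration());
    try {
      byte[] value = fs.getXAttr(path, name);
      if (value == null) {
        throw new IOException("No such xattr: " + name);
      }
      System.out.println("# file: " + path);
      System.out.println(name + "=" + new String(value, "UTF-8"));
    } catch (IOException e) {
      System.err.println(e.getMessage());
      System.exit(1);                  // non-zero exit code, as requested above
    }
  }
}
{code}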




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6421) RHEL4 fails to compile vecsum.c

2014-05-16 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai updated HDFS-6421:


Attachment: HDFS-6421.patch

This code in the stopwatch structure gets the rusage and stores it into 
{{struct rusage rusage;}}, but the value is never used. 
{code}
if (getrusage(RUSAGE_THREAD, &watch->rusage) < 0) {
    int err = errno;
    fprintf(stderr, "getrusage failed: error %d (%s)\n",
            err, strerror(err));
    goto error;
}
{code}

Removing the block to get RHEL4 compiling again.

> RHEL4 fails to compile vecsum.c
> ---
>
> Key: HDFS-6421
> URL: https://issues.apache.org/jira/browse/HDFS-6421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.5.0
> Environment: RHEL4
>Reporter: Jason Lowe
>Assignee: Mit Desai
> Attachments: HDFS-6421.patch
>
>
> After HDFS-6287 RHEL4 builds fail trying to compile vecsum.c since they don't 
> have RUSAGE_THREAD.  RHEL4 is ancient, but we use it in a 32-bit 
> compatibility environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-05-16 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6422:
---

Attachment: (was: HDFS-6422.1.patch)

> getfattr in CLI doesn't throw exception or return non-0 return code when 
> xattr doesn't exist
> 
>
> Key: HDFS-6422
> URL: https://issues.apache.org/jira/browse/HDFS-6422
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Charles Lamb
>
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6421) RHEL4 fails to compile vecsum.c

2014-05-16 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai reassigned HDFS-6421:
---

Assignee: Mit Desai

> RHEL4 fails to compile vecsum.c
> ---
>
> Key: HDFS-6421
> URL: https://issues.apache.org/jira/browse/HDFS-6421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.5.0
> Environment: RHEL4
>Reporter: Jason Lowe
>Assignee: Mit Desai
>
> After HDFS-6287 RHEL4 builds fail trying to compile vecsum.c since they don't 
> have RUSAGE_THREAD.  RHEL4 is ancient, but we use it in a 32-bit 
> compatibility environment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6329:
-

Attachment: HDFS-6329.v5.patch

Removed the debug line

> WebHdfs does not work if HA is enabled on NN but logical URI is not 
> configured.
> ---
>
> Key: HDFS-6329
> URL: https://issues.apache.org/jira/browse/HDFS-6329
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Blocker
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6329.patch, HDFS-6329.patch, HDFS-6329.v2.patch, 
> HDFS-6329.v3.patch, HDFS-6329.v4.patch, HDFS-6329.v5.patch
>
>
> After HDFS-6100, namenode unconditionally puts the logical name (name service 
> id) as the token service when redirecting webhdfs requests to datanodes, if 
> it detects HA.
> For HA configurations with no client-side failover proxy provider (e.g. IP 
> failover), webhdfs does not work since the clients do not use the logical name.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6360) MiniDFSCluster can cause unexpected side effects due to sharing of config

2014-05-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999515#comment-13999515
 ] 

Andrew Wang commented on HDFS-6360:
---

This seems like a pretty serious bug. Kihwal, do you have a rough count of how 
many tests are broken? If it's really substantial, we might need to do the 
fixing on a branch to avoid a mega patch, and to help split up the work.

> MiniDFSCluster can cause unexpected side effects due to sharing of config
> -
>
> Key: HDFS-6360
> URL: https://issues.apache.org/jira/browse/HDFS-6360
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>
> As noted in HDFS-6329 and HDFS-5522, certain use cases of MiniDFSCluster can 
> result in unexpected results and falsely failing or passing unit tests.
> Since a {{Configuration}} object is shared for all namenode startups, the 
> modified conf object during a NN startup is passed to the next NN startup.  
> The effect of the modified conf propagation and subsequent modifications is 
> different depending on whether it is a single NN cluster, HA cluster or 
> federation cluster.
> It also depends on what test cases are doing with the config. For example, 
> MiniDFSCluster#getConfiguration(int) returns the saved conf for the specified 
> NN, but that is not actually the conf object used by the NN. It just 
> contained the same content one time in the past and it is not guaranteed to 
> be that way.
> Restarting the same NN can also cause unexpected results. The new NN will 
> switch to the conf that was cloned & saved AFTER the last startup.  The new 
> NN will start with a changed config intentionally or unintentionally.  The 
> config variables such as {{fs.defaultFs}}, {{dfs.namenode.rpc-address}} will 
> be implicitly set differently than the initial condition.  Some test cases 
> rely on this and others occasionally break because of this.
> In summary,
> * MiniDFSCluster does not properly isolate configs.
> * Many test cases happen to work most of times. Correcting MiniDFSCluster 
> causes mass breakages of test cases and requires fixing them.
> * Many test cases rely on broken behavior and might pass when they should 
> have actually failed.
> We need to
> * Make MiniDFSCluster behave in a consistent way
> * Provide proper methods and documentation for the correct usage of 
> MiniDFSCluster
> * Fix the unit tests that will be broken after improving MiniDFSCluster.
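
One way to picture the isolation being asked for is a per-NN copy of the shared config, so that one startup cannot leak settings into the next (a sketch under that assumption, not a proposed patch):

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch: copy the Configuration per NameNode so startups do not mutate the shared object.
final class ConfIsolationExample {
  static Configuration confForNameNode(Configuration shared, String rpcAddress) {
    Configuration copy = new Configuration(shared);   // copy constructor; 'shared' stays untouched
    copy.set("dfs.namenode.rpc-address", rpcAddress);
    copy.set("fs.defaultFS", "hdfs://" + rpcAddress);
    return copy;
  }
}
{code}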



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998926#comment-13998926
 ] 

Kihwal Lee commented on HDFS-6397:
--

I've noticed that the dead node count does not include nodes that are in 
dfs.include but never contacted the NN.  The ones that contacted the NN, but later 
died, do count toward the dead node count.  So live_node_count + 
dead_node_count can be less than the total node count from dfs.include.
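
In other words, the accounting that would match the 1.x behavior is roughly the following (illustrative sketch with hypothetical names):

{code}
import java.util.Set;

// Hypothetical sketch of the counting rule discussed above: hosts listed in dfs.include
// that never registered with the NN should still be counted as dead.
final class DeadNodeCountExample {
  static int deadNodeCount(Set<String> includedHosts, Set<String> liveHosts,
      Set<String> registeredButDeadHosts) {
    int dead = registeredButDeadHosts.size();
    for (String host : includedHosts) {
      if (!liveHosts.contains(host) && !registeredButDeadHosts.contains(host)) {
        dead++;   // in dfs.include but never contacted the NN
      }
    }
    return dead;
  }
}
{code}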

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch
>
>
> Context: 
> When NN is started without any live datanodes but there are nodes in 
> dfs.include, NN shows the dead count as '0'.
> There are two inconsistencies:
> 1. If you click on the dead node link (which shows the count as 0), it will 
> display the list of dead nodes correctly.
> 2. Hadoop 1.x used to display the count correctly.
> The following snippets of JMX response will explain it further:
> Look at the value of "NumDeadDataNodes" 
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at " "DeadNodes"".
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6406) Add capability for NFS gateway to reject connections from unprivileged ports

2014-05-16 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000372#comment-14000372
 ] 

Brandon Li commented on HDFS-6406:
--

Oops, I missed the train. I will open a different JIRA if I notice anything to 
be improved later. Thanks.

> Add capability for NFS gateway to reject connections from unprivileged ports
> 
>
> Key: HDFS-6406
> URL: https://issues.apache.org/jira/browse/HDFS-6406
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 2.5.0
>
> Attachments: HDFS-6406.patch, HDFS-6406.patch
>
>
> Many NFS servers have the ability to only accept client connections 
> originating from privileged ports. It would be nice if the HDFS NFS gateway 
> had the same feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6406) Add capability for NFS gateway to reject connections from unprivileged ports

2014-05-16 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-6406:
-

Issue Type: New Feature  (was: Bug)

> Add capability for NFS gateway to reject connections from unprivileged ports
> 
>
> Key: HDFS-6406
> URL: https://issues.apache.org/jira/browse/HDFS-6406
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-6406.patch, HDFS-6406.patch
>
>
> Many NFS servers have the ability to only accept client connections 
> originating from privileged ports. It would be nice if the HDFS NFS gateway 
> had the same feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999247#comment-13999247
 ] 

Colin Patrick McCabe commented on HDFS-4913:


One small difference I notice between fuse_dfs and the FsShell is that the 
latter now pulls its trash configuration from the NameNode ("server-side 
trash"), but {{fuse_dfs}} still requires you to specify the {{use_trash}} 
option when starting FUSE.  I think this is probably OK, though.  Existing 
{{fuse_dfs}} configurations will continue to work, and I expect use of the 
trash to fade away gradually, as people use snapshots instead.

> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch, 
> HDFS-4913.004.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
> FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>  at 
> org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>  at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>  at 
> java.security.AccessController.doPrivileged(Native Method)
>  at 
> javax.security.auth.Subject.doAs(Subject.java:396)
>  at

[jira] [Commented] (HDFS-6263) Remove DRFA.MaxBackupIndex config from log4j.properties

2014-05-16 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998612#comment-13998612
 ] 

Akira AJISAKA commented on HDFS-6263:
-

Hi [~abutala], thanks for the patch. +1 (non-binding).
bq. Is this by mistake? Should I just remove the redundant definitions? Please 
advise. Thanks!
I think it's a mistake. Let's file a JIRA and remove the redundant definitions. 
Thanks again for the report.

> Remove DRFA.MaxBackupIndex config from log4j.properties
> ---
>
> Key: HDFS-6263
> URL: https://issues.apache.org/jira/browse/HDFS-6263
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.4.0
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-6263.patch
>
>
> HDFS-side of HADOOP-10525.
> {code}
> # uncomment the next line to limit number of backup files
> # log4j.appender.ROLLINGFILE.MaxBackupIndex=10
> {code}
> In hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties, the 
> above lines should be removed because the appender (DRFA) doesn't support 
> MaxBackupIndex config.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6406) Add capability for NFS gateway to reject connections from unprivileged ports

2014-05-16 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000363#comment-14000363
 ] 

Brandon Li commented on HDFS-6406:
--

Please give me a bit of time. I will review the patch late today or tomorrow.

> Add capability for NFS gateway to reject connections from unprivileged ports
> 
>
> Key: HDFS-6406
> URL: https://issues.apache.org/jira/browse/HDFS-6406
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.4.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 2.5.0
>
> Attachments: HDFS-6406.patch, HDFS-6406.patch
>
>
> Many NFS servers have the ability to only accept client connections 
> originating from privileged ports. It would be nice if the HDFS NFS gateway 
> had the same feature.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6400) Cannot execute "hdfs oiv_legacy"

2014-05-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6400:


Attachment: HDFS-6400.2.patch

Updated the usage of OfflineImageViewer.java

> Cannot execute "hdfs oiv_legacy"
> 
>
> Key: HDFS-6400
> URL: https://issues.apache.org/jira/browse/HDFS-6400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Critical
>  Labels: newbie
> Attachments: HDFS-6400.2.patch, HDFS-6400.patch
>
>
> HDFS-6293 added the "hdfs oiv_legacy" command to view a legacy fsimage, but 
> the command cannot be executed.
> In {{hdfs}},
> {code}
> elif [ "COMMAND" = "oiv_legacy" ] ; then
>   CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer
> {code}
> should be
> {code}
> elif [ "$COMMAND" = "oiv_legacy" ] ; then
>   CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6414) xattr modification operations are based on state of latest snapshot instead of current version of inode.

2014-05-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-6414.
-

Resolution: Fixed

> xattr modification operations are based on state of latest snapshot instead 
> of current version of inode.
> 
>
> Key: HDFS-6414
> URL: https://issues.apache.org/jira/browse/HDFS-6414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Andrew Wang
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: hdfs-6414.1.patch, hdfs-6414.2.patch
>
>
> {{XAttrStorage#updateINodeXAttrs}} modifies the inode's {{XAttrFeature}} 
> based on reading its current state.  However, the logic for reading current 
> state is incorrect and may instead read the state of the latest snapshot.  If 
> xattrs have been changed after creation of that snapshot, then subsequent 
> xattr operations may yield incorrect results.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6412) Interface audience and stability annotations missing from several new classes related to xattrs.

2014-05-16 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu reassigned HDFS-6412:


Assignee: Yi Liu

> Interface audience and stability annotations missing from several new classes 
> related to xattrs.
> 
>
> Key: HDFS-6412
> URL: https://issues.apache.org/jira/browse/HDFS-6412
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Yi Liu
>Priority: Minor
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6412.1.patch
>
>
> Let's add the appropriate interface audience and stability annotations to the 
> following new classes related to xattrs: {{XAttr}}, {{NNConf}} and 
> {{XAttrHelper}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6378) NFS: when portmap/rpcbind is not available, NFS registration should timeout instead of hanging

2014-05-16 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6378:
-

Description: When portmap/rpcbind is not available, NFS could be stuck at 
registration. Instead, NFS gateway should shut down automatically with proper 
error message.  (was: When portmap/rpcbind is not available, NFS registration 
should timeout instead of hanging. Instead, NFS gateway should shut down 
automatically with proper error message.)
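
A minimal sketch of that behavior, with hypothetical helper names rather than the actual Nfs3/portmap code: bound the registration attempt and shut down with a clear message instead of blocking forever.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RegistrationWithTimeout {
  /** Run the (hypothetical) portmap registration with a bounded wait. */
  public static void registerOrExit(Runnable registerWithPortmap) {
    ExecutorService exec = Executors.newSingleThreadExecutor();
    Future<?> f = exec.submit(registerWithPortmap);
    try {
      f.get(10, TimeUnit.SECONDS);   // assumed 10-second timeout
    } catch (TimeoutException e) {
      System.err.println("Failed to register with portmap/rpcbind; shutting down.");
      System.exit(1);
    } catch (Exception e) {
      System.err.println("Registration failed: " + e + "; shutting down.");
      System.exit(1);
    } finally {
      exec.shutdownNow();
    }
  }
}
{code}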

> NFS: when portmap/rpcbind is not available, NFS registration should timeout 
> instead of hanging 
> ---
>
> Key: HDFS-6378
> URL: https://issues.apache.org/jira/browse/HDFS-6378
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Brandon Li
>
> When portmap/rpcbind is not available, NFS could be stuck at registration. 
> Instead, NFS gateway should shut down automatically with proper error message.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6415) Missing null check in FSImageSerialization#writePermissionStatus()

2014-05-16 Thread Ted Yu (JIRA)
Ted Yu created HDFS-6415:


 Summary: Missing null check in 
FSImageSerialization#writePermissionStatus()
 Key: HDFS-6415
 URL: https://issues.apache.org/jira/browse/HDFS-6415
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
PermissionStatus.write(out, inode.getUserName(), inode.getGroupName(), p);
{code}
getUserName() / getGroupName() may return null, so a null check should be added 
for these two calls.
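
A minimal sketch of the suggested guard (not the actual FSImageSerialization change; the empty-string fallback is an assumption for illustration):

{code}
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.permission.PermissionStatus;

public class WritePermissionStatusSketch {
  static void writeChecked(DataOutput out, String user, String group,
      FsPermission p) throws IOException {
    // getUserName()/getGroupName() may return null; never pass null through.
    PermissionStatus.write(out,
        user != null ? user : "",
        group != null ? group : "",
        p);
  }
}
{code}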



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6423) Diskspace quota usage is wrongly updated when appending data from partial block

2014-05-16 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-6423:
---

 Summary: Diskspace quota usage is wrongly updated when appending 
data from partial block
 Key: HDFS-6423
 URL: https://issues.apache.org/jira/browse/HDFS-6423
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao


When appending new data to a file whose last block is a partial block, the 
diskspace quota usage is not correctly updated. For example, suppose the block 
size is 1024 bytes, and a file has size 1536 bytes (1.5 blocks). If we then 
append another 1024 bytes to the file, the diskspace usage for this file will 
not be updated to (2560 * replication) as expected, but (2048 * replication).

The cause of the issue is that in FSNamesystem#commitOrCompleteLastBlock, we 
have 
{code}
// Adjust disk space consumption if required
final long diff = fileINode.getPreferredBlockSize() - 
commitBlock.getNumBytes();
if (diff > 0) {
  try {
String path = fileINode.getFullPathName();
dir.updateSpaceConsumed(path, 0, -diff*fileINode.getFileReplication());
  } catch (IOException e) {
LOG.warn("Unexpected exception while updating disk space.", e);
  }
}
{code}
This code assumes that the last block of the file has never been completed 
before, and is therefore always counted at the preferred block size in the 
quota computation.
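
As a quick sanity check of the expected number (plain arithmetic rather than HDFS code; the replication factor below is an assumption):

{code}
// Plain-arithmetic sketch of the expected quota usage from the report.
public class ExpectedDiskspaceUsage {
  public static void main(String[] args) {
    long fileLength = 1536 + 1024;   // bytes in the file after the append
    short replication = 3;           // assumed replication factor
    // Diskspace usage should follow the actual file length, not whole blocks:
    System.out.println(fileLength * replication);   // 7680 = 2560 * replication
  }
}
{code}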




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-05-16 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13992501#comment-13992501
 ] 

Haohui Mai commented on HDFS-6293:
--

bq. In the results above, the amount of memory on the machine is far larger 
than the image so everything happens in memory and seeks are free.

Reads on the fsimage are mostly sequential, so it doesn't really matter whether 
the whole fsimage fits into memory or not.

bq. Can you run an experiment with a large fsimage (25G or so) with a 
representative fs hierarchy (not totally flat) and then generate DB and convert 
to LSR on a smaller machine (16G or so)?

The fsimage that I've experimented with originates from a production cluster. 
It was in the old format, which required a big machine to convert it to a 
PB-based fsimage. I had to strip it down to fit it onto my machine. Please see 
HDFS-5698 on how the image is generated. If you can send me your PB-based 
fsimage then I can experiment with it.

Since the image comes from a production cluster, the fs hierarchy is definitely 
not flat. I generated the DB in a Java VM with a 22G heap.


> Issues with OIV processing PB-based fsimages
> 
>
> Key: HDFS-6293
> URL: https://issues.apache.org/jira/browse/HDFS-6293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6293.000.patch, HDFS-6293.001.patch, 
> HDFS-6293.002-save-deprecated-fsimage.patch, Heap Histogram.html
>
>
> There are issues with OIV when processing fsimages in protobuf. 
> Due to the internal layout changes introduced by the protobuf-based fsimage, 
> OIV consumes excessive amount of memory.  We have tested with a fsimage with 
> about 140M files/directories. The peak heap usage when processing this image 
> in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
> the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
> heap (max new size was 1GB).  It should be possible to process any image with 
> the default heap size of 1.5GB.
> Another issue is the complete change of format/content in OIV's XML output.  
> I also noticed that the secret manager section has no tokens while there were 
> unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
> they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6411) nfs-hdfs-gateway mount raises I/O error and hangs when an unauthorized user attempts to access it

2014-05-16 Thread Zhongyi Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000306#comment-14000306
 ] 

Zhongyi Xie commented on HDFS-6411:
---

The probable cause of the bug is that when an unauthorized user attempts to 
access a file/dir, the NameNode throws an AuthorizationException, which is 
eventually caught by the NFS server's RPC handler. The RPC handler simply 
returns an ACCESS3Response object with empty file attributes and a non-NFS3_OK 
status, which violates the protocol between the NFS server and client and 
causes the client to hang.
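
A minimal sketch of that suggestion, using hypothetical helper shapes rather than the actual RpcProgramNfs3 code: map the authorization failure to an ACCES status instead of a generic error.

{code}
import java.util.concurrent.Callable;
import org.apache.hadoop.security.authorize.AuthorizationException;

public class AccessHandlerSketch {
  static final int NFS3_OK = 0;
  static final int NFS3ERR_IO = 5;
  static final int NFS3ERR_ACCES = 13;   // status values from RFC 1813

  /** Hypothetical, simplified ACCESS handler: translate the NameNode check
   *  result into an NFS3 status instead of always returning NFS3ERR_IO. */
  public static int handleAccess(Callable<Void> namenodeCheck) {
    try {
      namenodeCheck.call();              // may throw AuthorizationException
      return NFS3_OK;
    } catch (AuthorizationException e) {
      return NFS3ERR_ACCES;              // caller lacks access; client can recover
    } catch (Exception e) {
      return NFS3ERR_IO;                 // unexpected failure
    }
  }
}
{code}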

> nfs-hdfs-gateway mount raises I/O error and hangs when an unauthorized user 
> attempts to access it
> 
>
> Key: HDFS-6411
> URL: https://issues.apache.org/jira/browse/HDFS-6411
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Zhongyi Xie
>
> We use the nfs-hdfs gateway to expose HDFS through NFS.
> 0) login as root, run nfs-hdfs gateway as a user, say, nfsserver. 
> [root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
> backups  hive  mr-history  system  tmp  user
> 1) add a user nfs-test: adduser nfs-test (make sure that this user is not a 
> proxyuser of nfsserver)
> 2) switch to test user: su - nfs-test
> 3) access hdfs nfs gateway
> [nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
> ls: cannot open directory /hdfs: Input/output error
> retry:
> [nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
> ls: cannot access /hdfs: Stale NFS file handle
> 4) switch back to root and access hdfs nfs gateway
> [nfs-test@zhongyi-test-cluster-desktop ~]$ exit
> logout
> [root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
> ls: cannot access /hdfs: Stale NFS file handle
> the nfsserver log indicates we hit an authorization error in the rpc handler; 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: nfsserver is not allowed to impersonate nfs-test
> and NFS3ERR_IO is returned, which explains why we see input/output error. 
> One can catch the authorizationexception and return the correct error: 
> NFS3ERR_ACCES to fix the error message on the client side but that doesn't 
> seem to solve the mount hang issue though. When the mount hang happens, it 
> stops printing nfsserver log which makes it more difficult to figure out the 
> real cause of the hang. According to jstack and debugger, the nfsserver seems 
> to be waiting for client requests



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6409) Fix typo in log message about NameNode layout version upgrade.

2014-05-16 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-6409:
---

 Summary: Fix typo in log message about NameNode layout version 
upgrade.
 Key: HDFS-6409
 URL: https://issues.apache.org/jira/browse/HDFS-6409
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0, 3.0.0
Reporter: Chris Nauroth
Priority: Trivial


During startup, the NameNode logs a message if the existing metadata is using 
an old layout version.  This message contains a minor typo.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6409) Fix typo in log message about NameNode layout version upgrade.

2014-05-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6409:


Labels: newbie  (was: )

> Fix typo in log message about NameNode layout version upgrade.
> --
>
> Key: HDFS-6409
> URL: https://issues.apache.org/jira/browse/HDFS-6409
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Priority: Trivial
>  Labels: newbie
>
> During startup, the NameNode logs a message if the existing metadata is using 
> an old layout version.  This message contains a minor typo.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999486#comment-13999486
 ] 

Hadoop QA commented on HDFS-4913:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645087/HDFS-4913.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestBPOfferService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6910//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6910//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6910//console

This message is automatically generated.

> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch, 
> HDFS-4913.004.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
> FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at 
> org.apache.hadoop.hdfs.server.nam

[jira] [Updated] (HDFS-6413) xattr names erroneously handled as case-insensitive.

2014-05-16 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6413:
---

Attachment: (was: HDFS-6413.1.patch)

> xattr names erroneously handled as case-insensitive.
> 
>
> Key: HDFS-6413
> URL: https://issues.apache.org/jira/browse/HDFS-6413
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Charles Lamb
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: HDFS-6413.1.patch
>
>
> Xattr names currently are handled as case-insensitive.  The names should be 
> case-sensitive instead.
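
As a tiny illustration of the intended behavior (plain Java, not HDFS code), case-sensitive comparison distinguishes names that differ only in case:

{code}
public class XAttrNameCompare {
  public static void main(String[] args) {
    String a = "user.Foo", b = "user.foo";
    System.out.println(a.equalsIgnoreCase(b)); // true  -> wrong for xattr names
    System.out.println(a.equals(b));           // false -> desired behavior
  }
}
{code}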



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6375) Listing extended attributes with the search permission

2014-05-16 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000266#comment-14000266
 ] 

Charles Lamb commented on HDFS-6375:


[~andrew.wang] and I had an offline discussion. He is concerned that 
permissions fallback does not provide any distinction between true null values 
(i.e. no value was set for the xattr when it was created) and "you don't have 
permission" null values. He proposes that we add a new xattr method in the api 
which will allow for scanning values, similar to 
http://linux.die.net/man/2/listxattr. If there are xattrs with null values then 
it will return those as long as the caller had access. If you only have scan 
access then the method would not even return the name.
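
For reference, a sketch of one possible shape for the listing call being discussed (an assumption, not the final API): with search access the caller retrieves only the attribute names, analogous to listxattr(2).

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;

public interface XAttrListing {
  /** Returns the names of the xattrs on the path that are visible to the
   *  caller; values would still require separate read access. */
  List<String> listXAttrs(Path path) throws IOException;
}
{code}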

[~cnauroth], [~hitliuyi], what are your thoughts?

> Listing extended attributes with the search permission
> --
>
> Key: HDFS-6375
> URL: https://issues.apache.org/jira/browse/HDFS-6375
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Andrew Wang
>Assignee: Charles Lamb
> Attachments: HDFS-6375.1.patch, HDFS-6375.2.patch, HDFS-6375.3.patch, 
> HDFS-6375.4.patch
>
>
> From the attr(5) manpage:
> {noformat}
>Users with search access to a file or directory may retrieve a list  of
>attribute names defined for that file or directory.
> {noformat}
> This is like doing {{getfattr}} without the {{-d}} flag, which we currently 
> don't support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6397:
-

 Priority: Critical  (was: Major)
 Target Version/s: 2.5.0, 2.4.1
Affects Version/s: 2.4.1

If we make it before 2.4.1 is cut, I want it to be in 2.4.1. Otherwise we will 
fix it in 2.5.0.

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch
>
>
> Context: 
> When NN is started , without any live datanode but there are nodes in the 
> dfs.includes, NN shows the deadcount as '0'.
> There are two inconsistencies:
> 1. If you click on deadnode links (which shows the count is 0), it will 
> display the list of deadnodes correctly.
> 2.  hadoop 1.x used  to display the count correctly.
> The following snippets of JMX response will explain it further:
> Look at the value of "NumDeadDataNodes" 
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at " "DeadNodes"".
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5683) Better audit log messages for caching operations

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5683:
--

Assignee: Abhiraj Butala

> Better audit log messages for caching operations
> 
>
> Key: HDFS-5683
> URL: https://issues.apache.org/jira/browse/HDFS-5683
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Andrew Wang
>Assignee: Abhiraj Butala
>  Labels: caching
> Fix For: 2.5.0
>
> Attachments: HDFS-5683.001.patch
>
>
> Right now the caching audit logs aren't that useful, e.g.
> {noformat}
> 2013-12-18 14:14:54,423 INFO  FSNamesystem.audit 
> (FSNamesystem.java:logAuditMessage(7362)) - allowed=true ugi=andrew 
> (auth:SIMPLE)ip=/127.0.0.1   cmd=addCacheDirective   src=null
> dst=nullperm=null
> {noformat}
> It'd be good to include some more information when possible, like the path, 
> pool, id, etc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6382) HDFS File/Directory TTL

2014-05-16 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999746#comment-13999746
 ] 

Zesheng Wu commented on HDFS-6382:
--

Thanks [~cnauroth], I agree with your example MapReduce scenario and the risk, 
but this risk can't be avoided even if we use outside tools. For example, 
suppose we use a nightly cron job just like Andrew mentioned: a MapReduce job 
gets submitted, we derive input splits from a file, and then the file is 
deleted by the cron job after input-split calculation but before the map tasks 
start running and reading the blocks; the risk is the same. What I want to 
point out is that TTL is just a convenient way to accomplish the tasks 
described in the proposal; users should learn how to use it correctly, rather 
than use a more complicated approach that has no obvious advantage.
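
For reference, a rough sketch of the "outside tool" alternative being compared against, written against the public FileSystem API (the path and TTL value are assumptions for illustration):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TtlSweep {
  public static void main(String[] args) throws Exception {
    long ttlMillis = 30L * 24 * 60 * 60 * 1000;      // roughly one month
    long cutoff = System.currentTimeMillis() - ttlMillis;
    FileSystem fs = FileSystem.get(new Configuration());
    for (FileStatus st : fs.listStatus(new Path("/logs"))) {   // assumed path
      if (st.isFile() && st.getModificationTime() < cutoff) {
        fs.delete(st.getPath(), false);   // no trash; see proposal item 5
      }
    }
  }
}
{code}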

> HDFS File/Directory TTL
> ---
>
> Key: HDFS-6382
> URL: https://issues.apache.org/jira/browse/HDFS-6382
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Affects Versions: 2.4.0
>Reporter: Zesheng Wu
>
> In production environment, we always have scenario like this, we want to 
> backup files on hdfs for some time and then hope to delete these files 
> automatically. For example, we keep only 1 day's logs on local disk due to 
> limited disk space, but we need to keep about 1 month's logs in order to 
> debug program bugs, so we keep all the logs on hdfs and delete logs which are 
> older than 1 month. This is a typical scenario of HDFS TTL. So here we 
> propose that hdfs can support TTL.
> Following are some details of this proposal:
> 1. HDFS can support TTL on a specified file or directory
> 2. If a TTL is set on a file, the file will be deleted automatically after 
> the TTL is expired
> 3. If a TTL is set on a directory, the child files and directories will be 
> deleted automatically after the TTL is expired
> 4. The child file/directory's TTL configuration should override its parent 
> directory's
> 5. A global configuration is needed to configure that whether the deleted 
> files/directories should go to the trash or not
> 6. A global configuration is needed to configure that whether a directory 
> with TTL should be deleted when it is emptied by TTL mechanism or not.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6400) Cannot execute "hdfs oiv_legacy"

2014-05-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998781#comment-13998781
 ] 

Kihwal Lee commented on HDFS-6400:
--

The precommit build is not running, so I manually verified that there are no 
new javac or javadoc warnings with the patch applied. Unit tests won't be 
affected since the change only involves the content of an existing string and 
a shell command that is not directly executed from tests.

> Cannot execute "hdfs oiv_legacy"
> 
>
> Key: HDFS-6400
> URL: https://issues.apache.org/jira/browse/HDFS-6400
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Critical
>  Labels: newbie
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6400.2.patch, HDFS-6400.patch
>
>
> HDFS-6293 added "hdfs oiv_legacy" command to view a legacy fsimage, but 
> cannot execute the command.
> In {{hdfs}},
> {code}
> elif [ "COMMAND" = "oiv_legacy" ] ; then
>   CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer
> {code}
> should be
> {code}
> elif [ "$COMMAND" = "oiv_legacy" ] ; then
>   CLASS=org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999012#comment-13999012
 ] 

Hudson commented on HDFS-6293:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5605/])
HDFS-6293. Issues with OIV processing PB-based fsimages. Contributed by Kihwal 
Lee. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594439)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CacheManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointConf.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorageRetentionManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/StandbyCheckpointer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiff.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotFSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DelimitedImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/DepthCounter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FileDistributionVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageLoaderCurrent.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/ImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/IndentedImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/LsImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/NameDistributionVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TextWriterImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/XmlImageVisitor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenod

[jira] [Updated] (HDFS-6414) xattr modification operations are based on state of latest snapshot instead of current version of inode.

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6414:
--

Attachment: hdfs-6414.1.patch

Thanks for the nice find, Chris. Patch attached, which essentially follows your 
suggestions.

I also found an additional issue surrounding quota handling when adding a 
feature not present in the snapshot; a simple fix and test for that are 
included as well.

> xattr modification operations are based on state of latest snapshot instead 
> of current version of inode.
> 
>
> Key: HDFS-6414
> URL: https://issues.apache.org/jira/browse/HDFS-6414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Andrew Wang
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: hdfs-6414.1.patch, hdfs-6414.2.patch
>
>
> {{XAttrStorage#updateINodeXAttrs}} modifies the inode's {{XAttrFeature}} 
> based on reading its current state.  However, the logic for reading current 
> state is incorrect and may instead read the state of the latest snapshot.  If 
> xattrs have been changed after creation of that snapshot, then subsequent 
> xattr operations may yield incorrect results.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998926#comment-13998926
 ] 

Kihwal Lee edited comment on HDFS-6397 at 5/15/14 5:22 PM:
---

I've noticed that the dead node count does not include the nodes that are in 
dfs.include, but never contacted NN.  The ones that contacted NN then died do 
count toward the dead node count.  So live_node_count + dead_node_count can be 
less than total node count from dfs.include.


was (Author: kihwal):
I've noticed that the dead node count does not include the nodes that are in 
dfs.include, but never contacted NN.  If the ones that contacted NN, but later 
died, do count toward the dead node count.  So live_node_count + 
dead_node_count can be less than total node count from dfs.include.

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch
>
>
> Context: 
> When NN is started , without any live datanode but there are nodes in the 
> dfs.includes, NN shows the deadcount as '0'.
> There are two inconsistencies:
> 1. If you click on deadnode links (which shows the count is 0), it will 
> display the list of deadnodes correctly.
> 2.  hadoop 1.x used  to display the count correctly.
> The following snippets of JMX response will explain it further:
> Look at the value of "NumDeadDataNodes" 
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at " "DeadNodes"".
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999025#comment-13999025
 ] 

Hudson commented on HDFS-6326:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5605/])
HDFS-6326. WebHdfs ACL compatibility is broken. Contributed by Chris Nauroth. 
(cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594743)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsAclPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/AclTestHelpers.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestAclWithSnapshot.java


> WebHdfs ACL compatibility is broken
> ---
>
> Key: HDFS-6326
> URL: https://issues.apache.org/jira/browse/HDFS-6326
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Daryn Sharp
>Assignee: Chris Nauroth
>Priority: Blocker
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6326-branch-2.4.patch, HDFS-6326.1.patch, 
> HDFS-6326.2.patch, HDFS-6326.3.patch, HDFS-6326.4.patch, HDFS-6326.5.patch, 
> HDFS-6326.6.patch, aclfsperm.example
>
>
> 2.4 ACL support is completely incompatible with <2.4 webhdfs servers.  The NN 
> throws an {{IllegalArgumentException}} exception.
> {code}
> hadoop fs -ls webhdfs://nn/
> Found 21 items
> ls: Invalid value for webhdfs parameter "op": No enum constant 
> org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
> [... 20 more times...]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6381) Fix a typo in INodeReference.java

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998203#comment-13998203
 ] 

Hudson commented on HDFS-6381:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1753/])
HDFS-6381. Fix a typo in INodeReference.java. Contributed by Binglin Chang. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594447)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java


> Fix a typo in INodeReference.java
> -
>
> Key: HDFS-6381
> URL: https://issues.apache.org/jira/browse/HDFS-6381
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Trivial
> Fix For: 2.5.0
>
> Attachments: HDFS-6381.v1.patch
>
>
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
> {code}
>   * For example,
> - * (1) Support we have /abc/foo, say the inode of foo is 
> inode(id=1000,name=foo)
> + * (1) Suppose we have /abc/foo, say the inode of foo is 
> inode(id=1000,name=foo)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6414) xattr modification operations are based on state of latest snapshot instead of current version of inode.

2014-05-16 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999500#comment-13999500
 ] 

Yi Liu commented on HDFS-6414:
--

Thanks [~cnauroth], it's nice.

> xattr modification operations are based on state of latest snapshot instead 
> of current version of inode.
> 
>
> Key: HDFS-6414
> URL: https://issues.apache.org/jira/browse/HDFS-6414
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Andrew Wang
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: hdfs-6414.1.patch, hdfs-6414.2.patch
>
>
> {{XAttrStorage#updateINodeXAttrs}} modifies the inode's {{XAttrFeature}} 
> based on reading its current state.  However, the logic for reading current 
> state is incorrect and may instead read the state of the latest snapshot.  If 
> xattrs have been changed after creation of that snapshot, then subsequent 
> xattr operations may yield incorrect results.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6410) DFSClient unwraps AclException in xattr methods, but those methods cannot throw AclException.

2014-05-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-6410:
-

Assignee: Andrew Wang

> DFSClient unwraps AclException in xattr methods, but those methods cannot 
> throw AclException.
> -
>
> Key: HDFS-6410
> URL: https://issues.apache.org/jira/browse/HDFS-6410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: Chris Nauroth
>Assignee: Andrew Wang
>Priority: Minor
> Fix For: HDFS XAttrs (HDFS-2006)
>
> Attachments: hdfs-6410.1.patch
>
>
> The various xattr methods in {{DFSClient}} specify {{AclException}} in the 
> call to {{RemoteException#unwrapRemoteException}}.  It's impossible for the 
> xattr APIs to throw {{AclException}}.  Since encountering {{AclException}} 
> would be an unexpected condition, we should not unwrap it so that instead we 
> maintain the full stack trace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6397) NN shows inconsistent value in deadnode count

2014-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999030#comment-13999030
 ] 

Hadoop QA commented on HDFS-6397:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12644945/HDFS-6397.1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6905//console

This message is automatically generated.

> NN shows inconsistent value in deadnode count 
> --
>
> Key: HDFS-6397
> URL: https://issues.apache.org/jira/browse/HDFS-6397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>Priority: Critical
> Attachments: HDFS-6397.1.patch, HDFS-6397.2.patch
>
>
> Context: 
> When NN is started , without any live datanode but there are nodes in the 
> dfs.includes, NN shows the deadcount as '0'.
> There are two inconsistencies:
> 1. If you click on deadnode links (which shows the count is 0), it will 
> display the list of deadnodes correctly.
> 2.  hadoop 1.x used  to display the count correctly.
> The following snippets of JMX response will explain it further:
> Look at the value of "NumDeadDataNodes" 
> {noformat}
>  {
> "name" : "Hadoop:service=NameNode,name=FSNamesystemState",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> "CapacityTotal" : 0,
> "CapacityUsed" : 0,
> ... 
>"NumLiveDataNodes" : 0,
> "NumDeadDataNodes" : 0,
> "NumDecomLiveDataNodes" : 0,
> "NumDecomDeadDataNodes" : 0,
> "NumDecommissioningDataNodes" : 0,
> "NumStaleDataNodes" : 0
>   },
> {noformat}
> Look at " "DeadNodes"".
> {noformat}
> {
> "name" : "Hadoop:service=NameNode,name=NameNodeInfo",
> "modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
> 
> 
> "TotalBlocks" : 70,
> "TotalFiles" : 129,
> "NumberOfMissingBlocks" : 0,
> "LiveNodes" : "{}",
> "DeadNodes" : 
> "{\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.X.XX:71\"},\".linkedin.com\":{\"lastContact\":1400037397,\"decommissioned\":false,\"xferaddr\":\"172.XX.XX.XX:71\"}}",
> "DecomNodes" : "{}",
>.
>   }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-05-16 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6293:
-

Status: Open  (was: Patch Available)

> Issues with OIV processing PB-based fsimages
> 
>
> Key: HDFS-6293
> URL: https://issues.apache.org/jira/browse/HDFS-6293
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Haohui Mai
>Priority: Blocker
> Attachments: HDFS-6293.000.patch, HDFS-6293.001.patch, 
> HDFS-6293.002-save-deprecated-fsimage.patch, Heap Histogram.html
>
>
> There are issues with OIV when processing fsimages in protobuf. 
> Due to the internal layout changes introduced by the protobuf-based fsimage, 
> OIV consumes excessive amount of memory.  We have tested with a fsimage with 
> about 140M files/directories. The peak heap usage when processing this image 
> in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
> the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
> heap (max new size was 1GB).  It should be possible to process any image with 
> the default heap size of 1.5GB.
> Another issue is the complete change of format/content in OIV's XML output.  
> I also noticed that the secret manager section has no tokens while there were 
> unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
> they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-2006) ability to support storing extended attributes per file

2014-05-16 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-2006:
--

Attachment: HDFS-2006-Merge-2.patch

Attached another merge patch which includes the fixes for HDFS-6413, HDFS-6414, 
HDFS-6410, HDFS-6412.


> ability to support storing extended attributes per file
> ---
>
> Key: HDFS-2006
> URL: https://issues.apache.org/jira/browse/HDFS-2006
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: HDFS XAttrs (HDFS-2006)
>Reporter: dhruba borthakur
>Assignee: Yi Liu
> Attachments: ExtendedAttributes.html, HDFS-2006-Merge-1.patch, 
> HDFS-2006-Merge-2.patch, HDFS-XAttrs-Design-1.pdf, HDFS-XAttrs-Design-2.pdf, 
> HDFS-XAttrs-Design-3.pdf, Test-Plan-for-Extended-Attributes-1.pdf, 
> xattrs.1.patch, xattrs.patch
>
>
> It would be nice if HDFS provides a feature to store extended attributes for 
> files, similar to the one described here: 
> http://en.wikipedia.org/wiki/Extended_file_attributes. 
> The challenge is that it has to be done in such a way that a site not using 
> this feature does not waste precious memory resources in the namenode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6370) Web UI fails to display in intranet under IE

2014-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999013#comment-13999013
 ] 

Hudson commented on HDFS-6370:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5605/])
HDFS-6370. Web UI fails to display in intranet under IE. Contributed by Haohui 
Mai. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1594362)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html


> Web UI fails to display in intranet under IE
> 
>
> Key: HDFS-6370
> URL: https://issues.apache.org/jira/browse/HDFS-6370
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, journal-node, namenode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6370.000.patch
>
>
> When IE renders the web UI of a cluster that runs in the intranet, it forces 
> compatibility mode to be turned on, which causes the UI to fail to render 
> correctly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6411) /hdfs mount raises I/O error and hangs when an unauthorized user attempts to access it

2014-05-16 Thread Zhongyi Xie (JIRA)
Zhongyi Xie created HDFS-6411:
-

 Summary: /hdfs mount raises I/O error and hangs when an 
unauthorized user attempts to access it
 Key: HDFS-6411
 URL: https://issues.apache.org/jira/browse/HDFS-6411
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Zhongyi Xie


0) login as root, make sure service nfs-hdfs and nfs-hdfs-client are running:
[root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
backups  hive  mr-history  system  tmp  user
1) add a user nfs-test: adduser nfs-test
2) switch to test user: su - nfs-test
3) access hdfs nfs gateway
[nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
ls: cannot open directory /hdfs: Input/output error
retry:
[nfs-test@zhongyi-test-cluster-desktop ~]$ ls /hdfs
ls: cannot access /hdfs: Stale NFS file handle
4) switch back to root and access hdfs nfs gateway
[nfs-test@zhongyi-test-cluster-desktop ~]$ exit
logout
[root@zhongyi-test-cluster-desktop hdfs]# ls /hdfs
ls: cannot access /hdfs: Stale NFS file handle


the nfsserver log indicates we hit an authorization error in the rpc handler; 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 User: nfsserver is not allowed to impersonate nfs-test
and NFS3ERR_IO is returned, which explains why we see input/output error. 
One can catch the AuthorizationException and return the correct error, 
NFS3ERR_ACCES, to fix the error message on the client side, but that doesn't 
seem to solve the mount hang issue. When the mount hang happens, it stops 
printing the nfsserver log, which makes it more difficult to figure out the 
real cause of the hang. According to jstack and the debugger, the nfsserver 
seems to be waiting for client requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

