[jira] [Updated] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4762:
-

Attachment: HDFS-4762.patch.6

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697557#comment-13697557
 ] 

Brandon Li commented on HDFS-4762:
--

{quote}OffsetRange.compareTo(..) only compares min{quote}
Thanks!
{quote} The second if-statement should be if (!dumpFile.createNewFile()). 
Also, createNewFile() ensures that the file does not exist. So the first 
if-statement may not be needed.{quote}
I removed if (!dumpFile.createNewFile()) since dumpOut = new 
FileOutputStream(dumpFile); already creates the file.
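For reference, a minimal sketch of the point (the dump path and variable names 
here are illustrative, not the actual patch):
{code}
// FileOutputStream's constructor creates the file if it does not already
// exist, so a preceding createNewFile() check adds nothing.
File dumpFile = new File("/tmp/nfs-dump");          // hypothetical dump path
FileOutputStream dumpOut = new FileOutputStream(dumpFile);
{code}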
{quote}Is it okay to return when there is an exception? Should it re-throw the 
exception?{quote}
Any error will disable the file dump. Even if it re-throws the exception, the 
caller so far has no extra handling to do.
{quote} WriteManager.shutdownAsyncDataService() is not used.{quote}
I will create a JIRA to add shutdown procedures which will use this method.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697562#comment-13697562
 ] 

Hadoop QA commented on HDFS-4762:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590376/HDFS-4762.patch.6
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4585//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4585//console

This message is automatically generated.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4372) Track NameNode startup progress

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697571#comment-13697571
 ] 

Hadoop QA commented on HDFS-4372:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590366/HDFS-4372.4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4584//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4584//console

This message is automatically generated.

 Track NameNode startup progress
 ---

 Key: HDFS-4372
 URL: https://issues.apache.org/jira/browse/HDFS-4372
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4372.1.patch, HDFS-4372.2.patch, HDFS-4372.3.patch, 
 HDFS-4372.4.patch


 Track detailed progress information about the steps of NameNode startup to 
 enable display to users.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4504) DFSOutputStream#close doesn't always release resources (such as leases)

2013-07-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697624#comment-13697624
 ] 

Uma Maheswara Rao G commented on HDFS-4504:
---

Thanks, Colin, for working on this issue. 
Just to summarize: per my understanding there are 2 issues here: 1. stale 
references are left behind when the close call fails. 2. For a long-lived 
client, if completeFile fails, no one will ever recover the file, since the 
client keeps renewing the lease.

For #1, the fix would be fairly straightforward.
For #2, Kihwal brought up some cases above.

{quote}
•Extend complete() by adding an optional boolean arg, force. Things will stay 
compatible. If a new client is talking to an old NN, the file may not get 
completed right away, but this is no worse than current behavior. The client 
(lease renewer) can keep trying periodically. Probably less often than the 
lease renewal. We may only allow this when lastBlock is present, since the 
acked block length will reduce the risk of truncating valid data.
{quote}
Since the current close call already closes the streamer, where would we 
maintain this last block? Do you mean we would introduce another structure for 
it and check it periodically in the renewer or another thread?

(or) How about checking the state of the files in filesBeingWritten? If a file 
is closed from the client's perspective but completeFile/flushBuffer failed, 
we would not remove its reference from the DFSClient straight away. In this 
case, the renewer would check such (closed) files and query the real file 
status from the NN. If the file is closed on the NN (isFileClosed was added in 
trunk, I guess), then remove it from the filesBeingWritten list directly. 
Otherwise, call recoverLease ourselves (as we know no one else is going to 
recover such files).
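A minimal sketch of that renewer-side check, assuming a hypothetical 
pendingClosedFiles collection in the DFSClient; isFileClosed() and 
recoverLease() are the existing DistributedFileSystem calls:
{code}
Iterator<Path> it = pendingClosedFiles.iterator();
while (it.hasNext()) {
  Path p = it.next();
  if (dfs.isFileClosed(p)) {
    it.remove();           // NN already considers it closed: drop the reference
  } else {
    dfs.recoverLease(p);   // no one else will recover it, so trigger it here
  }
}
{code}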


 DFSOutputStream#close doesn't always release resources (such as leases)
 ---

 Key: HDFS-4504
 URL: https://issues.apache.org/jira/browse/HDFS-4504
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4504.001.patch, HDFS-4504.002.patch


 {{DFSOutputStream#close}} can throw an {{IOException}} in some cases.  One 
 example is if there is a pipeline error and then pipeline recovery fails.  
 Unfortunately, in this case, some of the resources used by the 
 {{DFSOutputStream}} are leaked.  One particularly important resource is file 
 leases.
 So it's possible for a long-lived HDFS client, such as Flume, to write many 
 blocks to a file, but then fail to close it.  Unfortunately, the 
 {{LeaseRenewerThread}} inside the client will continue to renew the lease for 
 the undead file.  Future attempts to close the file will just rethrow the 
 previous exception, and no progress can be made by the client.
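
 The failure mode in miniature (a sketch only; fs is a DistributedFileSystem 
 and the path is illustrative):
 {code}
 FSDataOutputStream out = fs.create(new Path("/flume/events.log"));
 out.write(data);
 try {
   out.close();      // pipeline error + failed recovery -> IOException
 } catch (IOException e) {
   // the lease is leaked: the LeaseRenewer keeps renewing it, and any
   // later close() just rethrows this same exception
 }
 {code}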

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697683#comment-13697683
 ] 

Hudson commented on HDFS-4888:
--

Integrated in Hadoop-Yarn-trunk #258 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/258/])
HDFS-4888. Refactor and fix FSNamesystem.getTurnOffTip. Contributed by Ravi 
Prakash. (Revision 1498665)

 Result = FAILURE
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498665
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java


 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HDFS-4888.patch, HDFS-4888.patch, HDFS-4888.patch


 E.g. when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4837) Allow DFSAdmin to run when HDFS is not the default file system

2013-07-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697715#comment-13697715
 ] 

Uma Maheswara Rao G commented on HDFS-4837:
---

Does this work with HA namenodes? If nameServiceAddress is available, it will 
create the connection directly, right? Otherwise this looks the same as taking 
FileSystem.getDefaultUri(conf) and creating the address from it. I feel that 
keeping only the default behaviour is more correct, since your installation's 
default FS says something else; if you want to pass a URI explicitly, you can 
still pass your specific configuration via the command line. This is the 
service address used by the DN, backup node, etc. Could you please test with 
HA namenodes and confirm before providing a revised patch? Also take care to 
call getDFS in setBalancerBandwidth instead of getFS.


Regards,
Uma

 Allow DFSAdmin to run when HDFS is not the default file system
 --

 Key: HDFS-4837
 URL: https://issues.apache.org/jira/browse/HDFS-4837
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Mostafa Elhemali
Assignee: Mostafa Elhemali
 Attachments: HDFS-4837.patch


 When Hadoop is running a different default file system than HDFS, but still 
 have HDFS namenode running, we are unable to run dfsadmin commands.
 I suggest that DFSAdmin use the same mechanism as NameNode does today to get 
 its address: look at dfs.namenode.rpc-address, and if not set, fall back to 
 getting it from the default file system (sketched below).
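
 A sketch of that lookup, assuming the standard Configuration and NetUtils 
 APIs (illustrative only, not the attached patch):
 {code}
 Configuration conf = new HdfsConfiguration();
 String rpcAddr = conf.get(DFSConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
 InetSocketAddress nnAddr = (rpcAddr != null)
     ? NetUtils.createSocketAddr(rpcAddr)   // explicit dfs.namenode.rpc-address
     : NameNode.getAddress(conf);           // falls back to the default FS URI
 {code}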

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-1539) prevent data loss when a cluster suffers a power loss

2013-07-02 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697735#comment-13697735
 ] 

Dave Latham commented on HDFS-1539:
---

Does anyone have any performance numbers for enabling this?  Or, does anyone 
just have some experience running this on significant workloads in production?  
(Especially HBase?)

 prevent data loss when a cluster suffers a power loss
 -

 Key: HDFS-1539
 URL: https://issues.apache.org/jira/browse/HDFS-1539
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, hdfs-client, namenode
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.23.0, 1.1.1

 Attachments: syncOnClose1.txt, syncOnClose2_b-1.txt, syncOnClose2.txt


 we have seen an instance where an external outage caused many datanodes to 
 reboot at around the same time.  This resulted in many corrupted blocks. 
 These were recently written blocks; the current implementation of the HDFS 
 Datanode does not sync the data of a block file when the block is closed.
 1. Have a cluster-wide config setting that causes the datanode to sync a 
 block file when a block is finalized.
 2. Introduce a new parameter to FileSystem.create() to trigger the new 
 behaviour, i.e. cause the datanode to sync a block-file when it is finalized.
 3. Implement FSDataOutputStream.hsync() to cause all data written to the 
 specified file to be written to stable storage (see the sketch below).
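
 A minimal sketch of the hsync() usage from item 3 (fs and data are assumed; 
 this illustrates the API, not the attached patches):
 {code}
 FSDataOutputStream out = fs.create(new Path("/data/records"));
 out.write(data);
 out.hsync();   // flush client buffers and force the data to stable storage
 out.close();
 {code}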

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4882) Namenode LeaseManager checkLeases() runs into infinite loop

2013-07-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697743#comment-13697743
 ] 

Uma Maheswara Rao G commented on HDFS-4882:
---

{quote}
When there's DN in the pipeline down and the pipeline stage is PIPELINE_CLOSE, 
the client triggers the data replication, do not wait the NN to do this(NN 
needs the file be finalized to do the replication, but finalized need all the 
blocks have at least dfs.namenode.replication.min(=2) replicas, these two 
conditions are contradicting).
{quote}
What do you mean by 'the client triggers the data replication'?
A file will not be finalized until all its blocks have at least min 
replication. But block replication can start once a block is committed to the 
DNs (each DN finalizes its block and reports it to the NN). On reaching min 
replication the NN will complete that block. If all blocks on the NN are in 
the COMPLETE state, then the file can be closed normally.

 Namenode LeaseManager checkLeases() runs into infinite loop
 ---

 Key: HDFS-4882
 URL: https://issues.apache.org/jira/browse/HDFS-4882
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 2.0.0-alpha
Reporter: Zesheng Wu
 Attachments: 4882.1.patch, 4882.patch, 4882.patch


 Scenario:
 1. cluster with 4 DNs
 2. the size of the file to be written is a little more than one block
 3. write the first block to 3 DNs, DN1-DN2-DN3
 4. all the data packets of the first block are successfully acked and the 
 client sets the pipeline stage to PIPELINE_CLOSE, but the last packet isn't 
 sent out
 5. DN2 and DN3 go down
 6. the client recovers the pipeline, but no new DN is added to the pipeline 
 because the current pipeline stage is PIPELINE_CLOSE
 7. the client keeps writing the last block, and tries to close the file after 
 writing all the data
 8. the NN finds that the penultimate block doesn't have enough replicas (our 
 dfs.namenode.replication.min=2), the client's close runs into an indefinite 
 loop (HDFS-2936), and at the same time the NN sets the last block's state to 
 COMPLETE
 9. shut down the client
 10. the file's lease exceeds the hard limit
 11. the LeaseManager realizes that and begins lease recovery by calling 
 fsnamesystem.internalReleaseLease()
 12. but the last block's state is COMPLETE, and this triggers the lease 
 manager's infinite loop, printing massive logs like this:
 {noformat}
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.LeaseManager: Lease [Lease.  Holder: 
 DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1] has expired hard
  limit
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. 
  Holder: DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1], src=
 /user/h_wuzesheng/test.dat
 2013-06-05,17:42:25,695 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
 NameSystem.internalReleaseLease: File = /user/h_wuzesheng/test.dat, block 
 blk_-7028017402720175688_1202597,
 lastBLockState=COMPLETE
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.LeaseManager: Started block recovery 
 for file /user/h_wuzesheng/test.dat lease [Lease.  Holder: DFSClient_NONM
 APREDUCE_-1252656407_1, pendingcreates: 1]
 {noformat}
 (the 3rd line log is a debug log added by us)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697760#comment-13697760
 ] 

Hudson commented on HDFS-4888:
--

Integrated in Hadoop-Hdfs-trunk #1448 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1448/])
HDFS-4888. Refactor and fix FSNamesystem.getTurnOffTip. Contributed by Ravi 
Prakash. (Revision 1498665)

 Result = FAILURE
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498665
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java


 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HDFS-4888.patch, HDFS-4888.patch, HDFS-4888.patch


 E.g. when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697775#comment-13697775
 ] 

Hudson commented on HDFS-4888:
--

Integrated in Hadoop-Mapreduce-trunk #1475 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1475/])
HDFS-4888. Refactor and fix FSNamesystem.getTurnOffTip. Contributed by Ravi 
Prakash. (Revision 1498665)

 Result = FAILURE
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498665
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHASafeMode.java


 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HDFS-4888.patch, HDFS-4888.patch, HDFS-4888.patch


 E.g. when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4882) Namenode LeaseManager checkLeases() runs into infinite loop

2013-07-02 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697784#comment-13697784
 ] 

Zesheng Wu commented on HDFS-4882:
--

1. {quote}What do you mean by 'the client triggers the data replication'?{quote}
I mean letting the client go through the following replica transfer process:
{code}
  // transfer a replica to the newly added pipeline node: pick an adjacent
  // surviving node as the source and the new node (at index d) as the target
  final DatanodeInfo src = d == 0? nodes[1]: nodes[d - 1];
  final DatanodeInfo[] targets = {nodes[d]};
  transfer(src, targets, lb.getBlockToken());
{code}

 Namenode LeaseManager checkLeases() runs into infinite loop
 ---

 Key: HDFS-4882
 URL: https://issues.apache.org/jira/browse/HDFS-4882
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 2.0.0-alpha
Reporter: Zesheng Wu
 Attachments: 4882.1.patch, 4882.patch, 4882.patch


 Scenario:
 1. cluster with 4 DNs
 2. the size of the file to be written is a little more than one block
 3. write the first block to 3 DNs, DN1-DN2-DN3
 4. all the data packets of the first block are successfully acked and the 
 client sets the pipeline stage to PIPELINE_CLOSE, but the last packet isn't 
 sent out
 5. DN2 and DN3 go down
 6. the client recovers the pipeline, but no new DN is added to the pipeline 
 because the current pipeline stage is PIPELINE_CLOSE
 7. the client keeps writing the last block, and tries to close the file after 
 writing all the data
 8. the NN finds that the penultimate block doesn't have enough replicas (our 
 dfs.namenode.replication.min=2), the client's close runs into an indefinite 
 loop (HDFS-2936), and at the same time the NN sets the last block's state to 
 COMPLETE
 9. shut down the client
 10. the file's lease exceeds the hard limit
 11. the LeaseManager realizes that and begins lease recovery by calling 
 fsnamesystem.internalReleaseLease()
 12. but the last block's state is COMPLETE, and this triggers the lease 
 manager's infinite loop, printing massive logs like this:
 {noformat}
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.LeaseManager: Lease [Lease.  Holder: 
 DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1] has expired hard
  limit
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. 
  Holder: DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1], src=
 /user/h_wuzesheng/test.dat
 2013-06-05,17:42:25,695 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
 NameSystem.internalReleaseLease: File = /user/h_wuzesheng/test.dat, block 
 blk_-7028017402720175688_1202597,
 lastBLockState=COMPLETE
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.LeaseManager: Started block recovery 
 for file /user/h_wuzesheng/test.dat lease [Lease.  Holder: DFSClient_NONM
 APREDUCE_-1252656407_1, pendingcreates: 1]
 {noformat}
 (the 3rd line log is a debug log added by us)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4882) Namenode LeaseManager checkLeases() runs into infinite loop

2013-07-02 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697787#comment-13697787
 ] 

Zesheng Wu commented on HDFS-4882:
--

{quote}
A file will not be finalized until all its blocks have at least min 
replication. But block replication can start once a block is committed to the 
DNs (each DN finalizes its block and reports it to the NN). On reaching min 
replication the NN will complete that block. If all blocks on the NN are in 
the COMPLETE state, then the file can be closed normally.
{quote}
Yes, I think you are right. But here the block is never committed to the DNs, 
so its replication is never started.

 Namenode LeaseManager checkLeases() runs into infinite loop
 ---

 Key: HDFS-4882
 URL: https://issues.apache.org/jira/browse/HDFS-4882
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 2.0.0-alpha
Reporter: Zesheng Wu
 Attachments: 4882.1.patch, 4882.patch, 4882.patch


 Scenario:
 1. cluster with 4 DNs
 2. the size of the file to be written is a little more than one block
 3. write the first block to 3 DNs, DN1-DN2-DN3
 4. all the data packets of the first block are successfully acked and the 
 client sets the pipeline stage to PIPELINE_CLOSE, but the last packet isn't 
 sent out
 5. DN2 and DN3 go down
 6. the client recovers the pipeline, but no new DN is added to the pipeline 
 because the current pipeline stage is PIPELINE_CLOSE
 7. the client keeps writing the last block, and tries to close the file after 
 writing all the data
 8. the NN finds that the penultimate block doesn't have enough replicas (our 
 dfs.namenode.replication.min=2), the client's close runs into an indefinite 
 loop (HDFS-2936), and at the same time the NN sets the last block's state to 
 COMPLETE
 9. shut down the client
 10. the file's lease exceeds the hard limit
 11. the LeaseManager realizes that and begins lease recovery by calling 
 fsnamesystem.internalReleaseLease()
 12. but the last block's state is COMPLETE, and this triggers the lease 
 manager's infinite loop, printing massive logs like this:
 {noformat}
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.LeaseManager: Lease [Lease.  Holder: 
 DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1] has expired hard
  limit
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering lease=[Lease. 
  Holder: DFSClient_NONMAPREDUCE_-1252656407_1, pendingcreates: 1], src=
 /user/h_wuzesheng/test.dat
 2013-06-05,17:42:25,695 WARN org.apache.hadoop.hdfs.StateChange: DIR* 
 NameSystem.internalReleaseLease: File = /user/h_wuzesheng/test.dat, block 
 blk_-7028017402720175688_1202597,
 lastBLockState=COMPLETE
 2013-06-05,17:42:25,695 INFO 
 org.apache.hadoop.hdfs.server.namenode.LeaseManager: Started block recovery 
 for file /user/h_wuzesheng/test.dat lease [Lease.  Holder: DFSClient_NONM
 APREDUCE_-1252656407_1, pendingcreates: 1]
 {noformat}
 (the 3rd line log is a debug log added by us)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4887) TestNNThroughputBenchmark exits abruptly

2013-07-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697789#comment-13697789
 ] 

Daryn Sharp commented on HDFS-4887:
---

There are methods called {{disableSystem(Exit|Halt)}} and 
{{(get|reset)First(Exit|Halt)Exception}} which appear to be designed for 
testing purposes.  Could these be used in lieu of another testing-specific 
change?
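
For illustration, a hedged sketch of how those existing hooks are typically 
used in a test (runBenchmark() is a hypothetical test body):
{code}
ExitUtil.disableSystemExit();          // ExitUtil.terminate() now throws instead of exiting
try {
  runBenchmark();
} catch (ExitUtil.ExitException e) {
  // inspect the captured exit status/message here
} finally {
  ExitUtil.resetFirstExitException();  // clear state for the next test
}
{code}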

 TestNNThroughputBenchmark exits abruptly
 

 Key: HDFS-4887
 URL: https://issues.apache.org/jira/browse/HDFS-4887
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: benchmarks, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-4887.patch, HDFS-4887.patch


 After HDFS-4840, TestNNThroughputBenchmark exits in the middle. This is 
 because ReplicationMonitor is being stopped while NN is still running. 
 This is only valid during testing. In normal cases, ReplicationMonitor thread 
 runs all the time once started. In standby or safemode, it just skips 
 calculating DN work. I think NNThroughputBenchmark needs to use ExitUtil to 
 prevent termination, rather than modifying ReplicationMonitor.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4887) TestNNThroughputBenchmark exits abruptly

2013-07-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697792#comment-13697792
 ] 

Kihwal Lee commented on HDFS-4887:
--

I first tried using ExitUtil, but this test case, and potentially others, want 
to stop the ReplicationMonitor thread while keeping the rest of the system up. 
If the NN exits, the BlockManager is torn down and any subsequent checks by 
the tests will race.

 TestNNThroughputBenchmark exits abruptly
 

 Key: HDFS-4887
 URL: https://issues.apache.org/jira/browse/HDFS-4887
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: benchmarks, test
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HDFS-4887.patch, HDFS-4887.patch


 After HDFS-4840, TestNNThroughputBenchmark exits in the middle. This is 
 because ReplicationMonitor is being stopped while NN is still running. 
 This is only valid during testing. In normal cases, ReplicationMonitor thread 
 runs all the time once started. In standby or safemode, it just skips 
 calculating DN work. I think NNThroughputBenchmark needs to use ExitUtil to 
 prevent termination, rather than modifying ReplicationMonitor.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Attachment: (was: 0002-HDFS-4860.patch)

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
 Attachments: HDFS-4860.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, there are several other attributes that 
 need to be added.
 I intend to add the following items, with the appropriate bean shown in 
 parentheses:
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)
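
 For example, such attributes would be read off the NameNodeInfo bean roughly 
 like this (a sketch; the "NNStarted" attribute name is illustrative):
 {code}
 MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
 ObjectName nnInfo = new ObjectName("Hadoop:service=NameNode,name=NameNodeInfo");
 Object started = mbs.getAttribute(nnInfo, "NNStarted");  // proposed started-time attribute
 {code}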

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Attachment: HDFS-4860.diff

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
 Attachments: HDFS-4860.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, there are several other attributes that 
 need to be added.
 I intend to add the following items, with the appropriate bean shown in 
 parentheses:
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Status: Patch Available  (was: Open)

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha, 0.20.204.1, 3.0.0, 2.1.0-beta
Reporter: Trevor Lorimer
 Attachments: HDFS-4860.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, there are several other attributes that 
 need to be added.
 I intend to add the following items, with the appropriate bean shown in 
 parentheses:
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Status: Open  (was: Patch Available)

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha, 0.20.204.1, 3.0.0, 2.1.0-beta
Reporter: Trevor Lorimer
 Attachments: HDFS-4860.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, there are several other attributes that 
 need to be added.
 I intend to add the following items, with the appropriate bean shown in 
 parentheses:
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697797#comment-13697797
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4762:
--

 I removed if (!dumpFile.createNewFile()) ...

The new patch still has if (dumpFile.createNewFile()).

+1 on the patch once the if-statement is removed.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4762:
-

Attachment: HDFS-4762.patch.7

{quote}The new patch still has if (dumpFile.createNewFile()).{quote}
Sorry. Fixed in the new patch. Thanks!

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697828#comment-13697828
 ] 

Hadoop QA commented on HDFS-4762:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590437/HDFS-4762.patch.7
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4587//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4587//console

This message is automatically generated.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4762:
-

 Component/s: nfs
Hadoop Flags: Reviewed

+1 the latest patch looks good.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4943) WebHdfsFileSystem does not work when original file path has encoded chars

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4943:
-

Fix Version/s: (was: 2.1.0-beta)
 Assignee: Jerry He
 Hadoop Flags: Reviewed

+1 patch looks good.

 WebHdfsFileSystem does not work when original file path has encoded chars 
 --

 Key: HDFS-4943
 URL: https://issues.apache.org/jira/browse/HDFS-4943
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 1.2.0, 1.1.2, 2.0.4-alpha
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Attachments: HDFS-4943-trunk.patch, HDFS-4943-trunk-v2.patch


 In HBase, the WAL (hlog) file name on hdfs is URL encoded. For example, 
 hdtest010%2C60020%2C1371000602151.1371058984668
 When we use the webhdfs client to access the hlog file via httpfs, it does 
 not work in this case.
 $ hadoop fs -ls hdfs:///user/biadmin/hbase_hlogs  
  
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls 
 hdfs:///user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $
 $ hadoop fs -ls 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 13/06/27 18:36:08 DEBUG web.WebHdfsFileSystem: Original exception is
 org.apache.hadoop.ipc.RemoteException:java.io.FileNotFoundException:File does 
 not exist: 
 /user/biadmin/hbase_hlogs/hdtest010,60020,1371000602151.1371058984668
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:114)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:299)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:104)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.getResponse(WebHdfsFileSystem.java:641)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.run(WebHdfsFileSystem.java:538)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:468)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:662)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:673)
 at org.apache.hadoop.fs.FileSystem.getFileStatus(FileSystem.java:1365)
 at 
 org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1048)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:987)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:965)
 at org.apache.hadoop.fs.FsShell.ls(FsShell.java:573)
 at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1571)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:1789)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
 ls: Cannot access 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668:
  No such file or directory.
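
 The root cause in miniature (a sketch, not the committed fix): a literal %2C 
 in the file name must itself be percent-encoded when the WebHDFS URL is 
 built, otherwise the server decodes it back to a comma:
 {code}
 String name = "hdtest010%2C60020%2C1371000602151.1371058984668";
 String encoded = URLEncoder.encode(name, "UTF-8");
 // encoded is "hdtest010%252C60020%252C1371000602151.1371058984668";
 // the server decodes "%252C" back to the literal "%2C" in the file name
 {code}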

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4943) WebHdfsFileSystem does not work when original file path has encoded chars

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4943:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 WebHdfsFileSystem does not work when original file path has encoded chars 
 --

 Key: HDFS-4943
 URL: https://issues.apache.org/jira/browse/HDFS-4943
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 1.2.0, 1.1.2, 2.0.4-alpha
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HDFS-4943-trunk.patch, HDFS-4943-trunk-v2.patch


 In HBase, the WAL (hlog) file name on hdfs is URL encoded. For example, 
 hdtest010%2C60020%2C1371000602151.1371058984668
 When we use the webhdfs client to access the hlog file via httpfs, it does 
 not work in this case.
 $ hadoop fs -ls hdfs:///user/biadmin/hbase_hlogs  
  
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls 
 hdfs:///user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $
 $ hadoop fs -ls 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 13/06/27 18:36:08 DEBUG web.WebHdfsFileSystem: Original exception is
 org.apache.hadoop.ipc.RemoteException:java.io.FileNotFoundException:File does 
 not exist: 
 /user/biadmin/hbase_hlogs/hdtest010,60020,1371000602151.1371058984668
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:114)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:299)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:104)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.getResponse(WebHdfsFileSystem.java:641)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.run(WebHdfsFileSystem.java:538)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:468)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:662)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:673)
 at org.apache.hadoop.fs.FileSystem.getFileStatus(FileSystem.java:1365)
 at 
 org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1048)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:987)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:965)
 at org.apache.hadoop.fs.FsShell.ls(FsShell.java:573)
 at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1571)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:1789)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
 ls: Cannot access 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668:
  No such file or directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4943) WebHdfsFileSystem does not work when original file path has encoded chars

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4943:
-

Fix Version/s: 2.1.0-beta

I have committed this.  Thanks, Jerry!

 WebHdfsFileSystem does not work when original file path has encoded chars 
 --

 Key: HDFS-4943
 URL: https://issues.apache.org/jira/browse/HDFS-4943
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 1.2.0, 1.1.2, 2.0.4-alpha
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HDFS-4943-trunk.patch, HDFS-4943-trunk-v2.patch


 In HBase, the WAL (hlog) file name on hdfs is URL encoded. For example, 
 hdtest010%2C60020%2C1371000602151.1371058984668
 When we use the webhdfs client to access the hlog file via httpfs, it does 
 not work in this case.
 $ hadoop fs -ls hdfs:///user/biadmin/hbase_hlogs  
  
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls 
 hdfs:///user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $
 $ hadoop fs -ls 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 13/06/27 18:36:08 DEBUG web.WebHdfsFileSystem: Original exception is
 org.apache.hadoop.ipc.RemoteException:java.io.FileNotFoundException:File does 
 not exist: 
 /user/biadmin/hbase_hlogs/hdtest010,60020,1371000602151.1371058984668
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:114)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:299)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:104)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.getResponse(WebHdfsFileSystem.java:641)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.run(WebHdfsFileSystem.java:538)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:468)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:662)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:673)
 at org.apache.hadoop.fs.FileSystem.getFileStatus(FileSystem.java:1365)
 at 
 org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1048)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:987)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:965)
 at org.apache.hadoop.fs.FsShell.ls(FsShell.java:573)
 at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1571)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:1789)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
 ls: Cannot access 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668:
  No such file or directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4943) WebHdfsFileSystem does not work when original file path has encoded chars

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697859#comment-13697859
 ] 

Hudson commented on HDFS-4943:
--

Integrated in Hadoop-trunk-Commit #4029 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4029/])
HDFS-4943. WebHdfsFileSystem does not work when original file path has 
encoded chars.  Contributed by Jerry He (Revision 1498962)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1498962
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 WebHdfsFileSystem does not work when original file path has encoded chars 
 --

 Key: HDFS-4943
 URL: https://issues.apache.org/jira/browse/HDFS-4943
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 1.2.0, 1.1.2, 2.0.4-alpha
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HDFS-4943-trunk.patch, HDFS-4943-trunk-v2.patch


 In HBase, the WAL (hlog) file name on hdfs is URL encoded. For example, 
 hdtest010%2C60020%2C1371000602151.1371058984668
 When we use the webhdfs client to access the hlog file via httpfs, it does 
 not work in this case.
 $ hadoop fs -ls hdfs:///user/biadmin/hbase_hlogs  
  
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls 
 hdfs:///user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $ hadoop fs -ls webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs
 Found 1 items
 -rw-r--r--   3 biadmin supergroup   15049470 2013-06-12 10:45 
 /user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 $
 $ hadoop fs -ls 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668
 13/06/27 18:36:08 DEBUG web.WebHdfsFileSystem: Original exception is
 org.apache.hadoop.ipc.RemoteException:java.io.FileNotFoundException:File does 
 not exist: 
 /user/biadmin/hbase_hlogs/hdtest010,60020,1371000602151.1371058984668
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:114)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:299)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$500(WebHdfsFileSystem.java:104)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.getResponse(WebHdfsFileSystem.java:641)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$Runner.run(WebHdfsFileSystem.java:538)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:468)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:662)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:673)
 at org.apache.hadoop.fs.FileSystem.getFileStatus(FileSystem.java:1365)
 at 
 org.apache.hadoop.fs.FileSystem.globStatusInternal(FileSystem.java:1048)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:987)
 at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:965)
 at org.apache.hadoop.fs.FsShell.ls(FsShell.java:573)
 at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1571)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:1789)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
 ls: Cannot access 
 webhdfs://hdtest010:14000/user/biadmin/hbase_hlogs/hdtest010%2C60020%2C1371000602151.1371058984668:
  No such file or directory.
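
 The root cause is that the file name's literal percent sign must be escaped 
 again when the client builds the request URI; otherwise the server decodes it 
 once and looks up the wrong path. A minimal, self-contained Java sketch of the 
 idea (not the committed patch; the host, port, and GETFILESTATUS query are 
 illustrative):
{code}
import java.net.URI;
import java.net.URISyntaxException;

public class WebHdfsUrlSketch {
  public static void main(String[] args) throws URISyntaxException {
    // File name as stored in HDFS; "%2C" is a literal part of the name.
    String fsPath = "/user/biadmin/hbase_hlogs/"
        + "hdtest010%2C60020%2C1371000602151.1371058984668";
    // The multi-argument URI constructor quotes illegal characters,
    // including '%', so the literal percent becomes "%25" and a single
    // server-side decode restores the original file name.
    URI uri = new URI("http", null, "hdtest010", 14000,
        "/webhdfs/v1" + fsPath, "op=GETFILESTATUS", null);
    System.out.println(uri.toASCIIString());
    // .../hdtest010%252C60020%252C1371000602151.1371058984668?op=GETFILESTATUS
  }
}
{code}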

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697921#comment-13697921
 ] 

Hadoop QA commented on HDFS-4860:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590431/HDFS-4860.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4586//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4586//console

This message is automatically generated.

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
 Attachments: HDFS-4860.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses); a bean-side sketch follows the list:
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)
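
 A hedged sketch of how such a read-only attribute is typically exposed (the 
 interface and getter names below are illustrative, not the exact signatures 
 from the patch): add a getter to the MXBean interface, implement it, and JMX 
 publishes it as an attribute with no extra registration.
{code}
// Illustrative names -- not the exact HDFS interfaces.
public interface NameNodeStatusMXBean {
  /** @return the time the NameNode started, as a formatted string. */
  String getNNStarted();
}

class NameNodeStatusImpl implements NameNodeStatusMXBean {
  private final long startTime = System.currentTimeMillis();

  // JMX exposes this getter as an attribute named "NNStarted".
  @Override
  public String getNNStarted() {
    return new java.util.Date(startTime).toString();
  }
}
{code}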

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Attachment: (was: HDFS-4860.diff)

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer

 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Status: Open  (was: Patch Available)

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha, 0.20.204.1, 3.0.0, 2.1.0-beta
Reporter: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Attachment: HDFS-4860-3.diff

Uploaded new patch HDFS-4860-3.diff

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Lorimer updated HDFS-4860:
-

Status: Patch Available  (was: Open)

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.4-alpha, 0.20.204.1, 3.0.0, 2.1.0-beta
Reporter: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4841) FsShell commands using secure webhdfs fail ClientFinalizer shutdown hook

2013-07-02 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned HDFS-4841:
---

Assignee: Robert Kanter

 FsShell commands using secure webhdfs fail ClientFinalizer shutdown hook
 

 Key: HDFS-4841
 URL: https://issues.apache.org/jira/browse/HDFS-4841
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security, webhdfs
Affects Versions: 3.0.0
Reporter: Stephen Chu
Assignee: Robert Kanter
 Attachments: core-site.xml, 
 hadoop-root-namenode-hdfs-upgrade-pseudo.ent.cloudera.com.out, hdfs-site.xml, 
 jsvc.out


 Hadoop version:
 {code}
 bash-4.1$ $HADOOP_HOME/bin/hadoop version
 Hadoop 3.0.0-SNAPSHOT
 Subversion git://github.com/apache/hadoop-common.git -r 
 d5373b9c550a355d4e91330ba7cc8f4c7c3aac51
 Compiled by root on 2013-05-22T08:06Z
 From source with checksum 8c4cc9b1e8d6e8361431e00f64483f
 This command was run using 
 /var/lib/hadoop-hdfs/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT.jar
 {code}
 I'm seeing a problem when issuing FsShell commands using the webhdfs:// URI 
 when security is enabled. The command completes but leaves a warning that 
 ShutdownHook 'ClientFinalizer' failed.
 {code}
 bash-4.1$ hadoop-3.0.0-SNAPSHOT/bin/hadoop fs -ls 
 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/
 2013-05-22 09:46:55,710 INFO  [main] util.Shell 
 (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
 Found 3 items
 drwxr-xr-x   - hbase supergroup  0 2013-05-22 09:46 
 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/hbase
 drwxr-xr-x   - hdfs  supergroup  0 2013-05-22 09:46 
 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/tmp
 drwxr-xr-x   - hdfs  supergroup  0 2013-05-22 09:46 
 webhdfs://hdfs-upgrade-pseudo.ent.cloudera.com:50070/user
 2013-05-22 09:46:58,660 WARN  [Thread-3] util.ShutdownHookManager 
 (ShutdownHookManager.java:run(56)) - ShutdownHook 'ClientFinalizer' failed, 
 java.lang.IllegalStateException: Shutdown in progress, cannot add a 
 shutdownHook
 java.lang.IllegalStateException: Shutdown in progress, cannot add a 
 shutdownHook
   at 
 org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:152)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2400)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2372)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$DtRenewer.getWebHdfs(WebHdfsFileSystem.java:1001)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$DtRenewer.cancel(WebHdfsFileSystem.java:1013)
   at org.apache.hadoop.security.token.Token.cancel(Token.java:382)
   at 
 org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:152)
   at 
 org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
   at 
 org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(DelegationTokenRenewer.java:241)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:822)
   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2446)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2463)
   at 
 org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
 {code}
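 The failure is self-inflicted: the ClientFinalizer hook closes the cached 
 WebHdfsFileSystem, token cancellation calls FileSystem.get, and the cache then 
 tries to register a new shutdown hook while shutdown is already running. A 
 hedged sketch of one way to avoid this (not necessarily the eventual fix), 
 using Hadoop's existing ShutdownHookManager API:
{code}
import org.apache.hadoop.util.ShutdownHookManager;

public class SafeHookSketch {
  /** Register cleanup, or run it inline if the JVM is already shutting down. */
  public static void registerOrRun(Runnable cleanup, int priority) {
    ShutdownHookManager mgr = ShutdownHookManager.get();
    if (mgr.isShutdownInProgress()) {
      cleanup.run();  // we are already inside a shutdown hook
    } else {
      mgr.addShutdownHook(cleanup, priority);
    }
  }
}
{code}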
 I've checked that FsShell + hdfs:// commands and WebHDFS operations through 
 curl work successfully:
 {code}
 bash-4.1$ hadoop-3.0.0-SNAPSHOT/bin/hadoop fs -ls /
 2013-05-22 09:46:43,663 INFO  [main] util.Shell 
 (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
 Found 3 items
 drwxr-xr-x   - hbase supergroup  0 2013-05-22 09:46 /hbase
 drwxr-xr-x   - hdfs  supergroup  0 2013-05-22 09:46 /tmp
 drwxr-xr-x   - hdfs  supergroup  0 2013-05-22 09:46 /user
 bash-4.1$ curl -i --negotiate -u : 
 "http://hdfs-upgrade-pseudo.ent.cloudera.com:50070/webhdfs/v1/?op=GETHOMEDIRECTORY"
 HTTP/1.1 401 
 Cache-Control: must-revalidate,no-cache,no-store
 Date: Wed, 22 May 2013 16:47:14 GMT
 Pragma: no-cache
 Date: Wed, 22 May 2013 16:47:14 GMT
 Pragma: no-cache
 Content-Type: text/html; charset=iso-8859-1
 WWW-Authenticate: Negotiate
 Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT
 Content-Length: 1358
 Server: Jetty(6.1.26)
 HTTP/1.1 200 OK
 Cache-Control: no-cache
 Expires: Thu, 01-Jan-1970 00:00:00 GMT
 Date: Wed, 22 May 2013 16:47:14 GMT
 Pragma: no-cache
 Date: Wed, 22 May 2013 16:47:14 GMT
 Pragma: no-cache
 Content-Type: application/json
 Set-Cookie: 
 

[jira] [Updated] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4762:
-

Fix Version/s: 3.0.0

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698021#comment-13698021
 ] 

Brandon Li commented on HDFS-4762:
--

Thank you, Nicholas. I've committed the patch to trunk.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4762:
-

  Resolution: Fixed
Target Version/s: 3.0.0
  Status: Resolved  (was: Patch Available)

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4947) Add NFS server export table to control export by hostname or IP range

2013-07-02 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4947:


 Summary: Add NFS server export table to control export by hostname 
or IP range
 Key: HDFS-4947
 URL: https://issues.apache.org/jira/browse/HDFS-4947
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Jing Zhao
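
As a rough illustration of what an export entry check could look like (purely a 
sketch; the matching rules in the eventual patch may differ), an entry is 
either an exact hostname or an IPv4 CIDR range:
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ExportMatchSketch {
  /** entry is either a hostname or "a.b.c.d/prefixLen" (IPv4 only here). */
  static boolean matches(String entry, InetAddress client)
      throws UnknownHostException {
    if (!entry.contains("/")) {
      return InetAddress.getByName(entry).equals(client);  // exact host
    }
    String[] parts = entry.split("/");                     // e.g. "10.0.0.0/8"
    int prefix = Integer.parseInt(parts[1]);
    int net = toInt(InetAddress.getByName(parts[0]).getAddress());
    int addr = toInt(client.getAddress());
    int mask = prefix == 0 ? 0 : -1 << (32 - prefix);
    return (net & mask) == (addr & mask);
  }

  private static int toInt(byte[] b) {
    return ((b[0] & 0xff) << 24) | ((b[1] & 0xff) << 16)
        | ((b[2] & 0xff) << 8) | (b[3] & 0xff);
  }
}
{code}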




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698025#comment-13698025
 ] 

Hudson commented on HDFS-4762:
--

Integrated in Hadoop-trunk-Commit #4030 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4030/])
HDFS-4762 Provide HDFS based NFSv3 and Mountd implementation. Contributed 
by Brandon Li (Revision 1499029)

 Result = FAILURE
brandonli : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499029
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/README.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/findbugsExcludeFile.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/Mountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/AsyncDataService.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/DFSClientCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/LruCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OffsetRange.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/WriteManager.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestOutOfOrderWrite.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestPortmapRegister.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/TestUdpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestOffsetRange.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestRpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml


 Provide 

[jira] [Commented] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Trevor Lorimer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698036#comment-13698036
 ] 

Trevor Lorimer commented on HDFS-4860:
--

Thanks for the notes, Todd. In the previous version I was splitting the string 
for display reasons. 
For this latest update I am just sending the data as it is presented by 
JournalManager; the consumer can reformat it as required.

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698039#comment-13698039
 ] 

Kihwal Lee commented on HDFS-4762:
--

The trunk build is failing for me. Am I doing something wrong?

{noformat}
[ERROR]   The project org.apache.hadoop:hadoop-hdfs-nfs:3.0.0-SNAPSHOT 
(/home/kihwal/devel/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml) has 1 
error
[ERROR] 'dependencies.dependency.version' for 
org.apache.hadoop:hadoop-nfs:jar is missing. @ line 44, column 17
{noformat}

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698048#comment-13698048
 ] 

Brandon Li commented on HDFS-4762:
--

My bad. I missed some files. Let me fix it.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4851) Deadlock in pipeline recovery

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698073#comment-13698073
 ] 

Hadoop QA commented on HDFS-4851:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12589794/hdfs-4851-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4588//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4588//console

This message is automatically generated.

 Deadlock in pipeline recovery
 -

 Key: HDFS-4851
 URL: https://issues.apache.org/jira/browse/HDFS-4851
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-4851-1.patch


 Here's a deadlock scenario that cropped up during pipeline recovery, debugged 
 through jstacks. Todd tipped me off to this one.
 # Pipeline fails, client initiates recovery. We have the old leftover 
 DataXceiver, and a new one doing recovery.
 # New DataXceiver does {{recoverRbw}}, grabbing the {{FsDatasetImpl}} lock
 # Old DataXceiver is in {{BlockReceiver#computePartialChunkCrc}}, calls 
 {{FsDatasetImpl#getTmpInputStreams}} and blocks on the {{FsDatasetImpl}} lock.
 # New DataXceiver {{ReplicaInPipeline#stopWriter}}, interrupting the old 
 DataXceiver and then joining on it.
 # Boom, deadlock. New DX holds the {{FsDatasetImpl}} lock and is joining on 
 the old DX, which is in turn waiting on the {{FsDatasetImpl}} lock (see the 
 sketch below).
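
 A minimal, self-contained sketch of the avoidance idea (illustrative names, 
 not HDFS code): never join a thread while holding the lock that thread is 
 waiting for.
{code}
public class PipelineRecoverySketch {
  private final Object datasetLock = new Object();

  /** Stop and join the old writer BEFORE taking the shared lock. */
  void recover(Thread oldWriter) throws InterruptedException {
    // If this interrupt-and-join were done inside synchronized (datasetLock),
    // an old writer blocked on the same lock could never exit, and the join
    // would deadlock -- exactly the scenario above.
    oldWriter.interrupt();
    oldWriter.join();
    synchronized (datasetLock) {
      // ...mutate replica state for recovery here...
    }
  }
}
{code}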

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698104#comment-13698104
 ] 

Hadoop QA commented on HDFS-4860:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590460/HDFS-4860-3.diff
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4589//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4589//console

This message is automatically generated.

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, several other attributes need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HDFS-4948:
-

 Summary: mvn site for hadoop-hdfs-nfs fails
 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans


Running mvn site on trunk results in the following error.
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
file 
/home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
 to copy. - [Help 1]
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4948:
-

Assignee: Brandon Li

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li

 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4948:
-

Attachment: HDFS-4948.patch

Uploaded a patch that cleans up the hadoop-hdfs-nfs/pom.xml file.

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4949) Centralized cache management in HDFS

2013-07-02 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-4949:
-

 Summary: Centralized cache management in HDFS
 Key: HDFS-4949
 URL: https://issues.apache.org/jira/browse/HDFS-4949
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Andrew Wang


HDFS currently has no support for managing or exposing in-memory caches at 
datanodes. This makes it harder for higher level application frameworks like 
Hive, Pig, and Impala to effectively use cluster memory, because they cannot 
explicitly cache important datasets or place their tasks for memory locality.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4949) Centralized cache management in HDFS

2013-07-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4949:
--

Attachment: caching-design-doc-2013-07-02.pdf

Here's a design doc that we've been working on internally. It proposes adding 
off-heap caches to each datanode using mmap and mlock, managed centrally by the 
NameNode.

Any feedback is welcome. I'm hoping we can have a fruitful design discussion on 
this JIRA, then perhaps get a branch and start development.
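
As a rough sketch of the core mechanism the document proposes: plain NIO can do 
the mmap part, while mlock(2) has no portable Java API, so the pinning call 
below is hypothetical (a real datanode would go through a native helper).
{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class BlockCacheSketch {
  /** Map a finalized block file read-only and fault its pages in. */
  public static MappedByteBuffer cacheBlock(String blockFile) throws Exception {
    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "r");
         FileChannel ch = raf.getChannel()) {
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      buf.load();  // touch every page so it becomes resident
      // Pinning would happen here via JNI, e.g. NativeIO.mlock(buf, length);
      // that call is hypothetical in this sketch.
      return buf;  // the mapping stays valid after the channel is closed
    }
  }
}
{code}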

 Centralized cache management in HDFS
 

 Key: HDFS-4949
 URL: https://issues.apache.org/jira/browse/HDFS-4949
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Andrew Wang
 Attachments: caching-design-doc-2013-07-02.pdf


 HDFS currently has no support for managing or exposing in-memory caches at 
 datanodes. This makes it harder for higher level application frameworks like 
 Hive, Pig, and Impala to effectively use cluster memory, because they cannot 
 explicitly cache important datasets or place their tasks for memory locality.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-3499) Make NetworkTopology support user specified topology class

2013-07-02 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HDFS-3499.
--

   Resolution: Fixed
Fix Version/s: 1.2.0
   2.1.0-beta

HADOOP-8469 already addresses most of this, so resolving it here.

 Make NetworkTopology support user specified topology class
 --

 Key: HDFS-3499
 URL: https://issues.apache.org/jira/browse/HDFS-3499
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 2.1.0-beta, 1.2.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4950) newly created files in fuse_dfs appear to be length 0 for a while

2013-07-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4950:
--

 Summary: newly created files in fuse_dfs appear to be length 0 for 
a while
 Key: HDFS-4950
 URL: https://issues.apache.org/jira/browse/HDFS-4950
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe


For some reason, newly created files in fuse_dfs appear to be length 0 for a 
while.

{code}
cmccabe@keter:~> echo hi > hi
cmccabe@keter:~> mv hi /mnt/tmp/hi
cmccabe@keter:~> ls -l /mnt/tmp
total 0
-rw-r--r-- 1 cmccabe users 0 Jul  2 13:24 hi
cmccabe@keter:~> cat /mnt/tmp/hi
cmccabe@keter:~> cat /mnt/tmp/hi
hi
{code}

Disabling FUSE attribute caching fixes this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4950) newly created files in fuse_dfs appear to be length 0 for a while

2013-07-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4950:
---

  Component/s: fuse-dfs
 Target Version/s: 2.2.0
Affects Version/s: 2.2.0

 newly created files in fuse_dfs appear to be length 0 for a while
 -

 Key: HDFS-4950
 URL: https://issues.apache.org/jira/browse/HDFS-4950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe

 For some reason, newly created files in fuse_dfs appear to be length 0 for a 
 while.
 {code}
 cmccabe@keter:~> echo hi > hi
 cmccabe@keter:~> mv hi /mnt/tmp/hi
 cmccabe@keter:~> ls -l /mnt/tmp
 total 0
 -rw-r--r-- 1 cmccabe users 0 Jul  2 13:24 hi
 cmccabe@keter:~> cat /mnt/tmp/hi
 cmccabe@keter:~> cat /mnt/tmp/hi
 hi
 {code}
 Disabling FUSE attribute caching fixes this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4949) Centralized cache management in HDFS

2013-07-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HDFS-4949:
-

Assignee: Andrew Wang

 Centralized cache management in HDFS
 

 Key: HDFS-4949
 URL: https://issues.apache.org/jira/browse/HDFS-4949
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Affects Versions: 3.0.0, 2.2.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: caching-design-doc-2013-07-02.pdf


 HDFS currently has no support for managing or exposing in-memory caches at 
 datanodes. This makes it harder for higher level application frameworks like 
 Hive, Pig, and Impala to effectively use cluster memory, because they cannot 
 explicitly cache important datasets or place their tasks for memory locality.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4950) newly created files in fuse_dfs appear to be length 0 for a while

2013-07-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4950:
---

Attachment: 2013-07-02.incorrect-attr-trace.txt
2013-07-02.correct-noattr-trace.txt

Here are two traces that demonstrate the problem. The issue seems to be that 
after the FUSE RELEASE operation, we never call GETATTR again when attribute 
caching is enabled.

One easy fix would be to disable attribute caching entirely. This would 
certainly fix the bug, but it might result in lower performance. As you can 
see from the correct noattr trace, many more GETATTR operations are done in 
that mode, all of which hit the NameNode.

Can we live with the FUSE attribute cache? That raises the question of how 
we're supposed to invalidate the fuse_dfs attribute cache. I wasn't able to 
find any documentation about this. I can see that FUSE checks the 
attributes of the root directory after the release.

{code}
   unique: 18, success, outsize: 16
unique: 19, opcode: RELEASE (18), nodeid: 2, insize: 64, pid: 0
release[140595351837776] flags: 0x8001
   unique: 19, success, outsize: 16
unique: 20, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 6597
getattr /
   unique: 20, success, outsize: 120
{code}

Is it possible that FUSE expects something to change there if a new file has 
been added?

 newly created files in fuse_dfs appear to be length 0 for a while
 -

 Key: HDFS-4950
 URL: https://issues.apache.org/jira/browse/HDFS-4950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
 Attachments: 2013-07-02.correct-noattr-trace.txt, 
 2013-07-02.incorrect-attr-trace.txt


 For some reason, newly created files in fuse_dfs appear to be length 0 for a 
 while.
 {code}
 cmccabe@keter:~> echo hi > hi
 cmccabe@keter:~> mv hi /mnt/tmp/hi
 cmccabe@keter:~> ls -l /mnt/tmp
 total 0
 -rw-r--r-- 1 cmccabe users 0 Jul  2 13:24 hi
 cmccabe@keter:~> cat /mnt/tmp/hi
 cmccabe@keter:~> cat /mnt/tmp/hi
 hi
 {code}
 Disabling FUSE attribute caching fixes this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4950) newly created files in fuse_dfs appear to be length 0 for a while due to attribute caching

2013-07-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4950:
---

Summary: newly created files in fuse_dfs appear to be length 0 for a while 
due to attribute caching  (was: newly created files in fuse_dfs appear to be 
length 0 for a while)

 newly created files in fuse_dfs appear to be length 0 for a while due to 
 attribute caching
 --

 Key: HDFS-4950
 URL: https://issues.apache.org/jira/browse/HDFS-4950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
 Attachments: 2013-07-02.correct-noattr-trace.txt, 
 2013-07-02.incorrect-attr-trace.txt


 For some reason, newly created files in fuse_dfs appear to be length 0 for a 
 while.
 {code}
 cmccabe@keter:~> echo hi > hi
 cmccabe@keter:~> mv hi /mnt/tmp/hi
 cmccabe@keter:~> ls -l /mnt/tmp
 total 0
 -rw-r--r-- 1 cmccabe users 0 Jul  2 13:24 hi
 cmccabe@keter:~> cat /mnt/tmp/hi
 cmccabe@keter:~> cat /mnt/tmp/hi
 hi
 {code}
 Disabling FUSE attribute caching fixes this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4948:
-

Status: Patch Available  (was: Open)

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4945) A Distributed and Cooperative NameNode Cluster for a Highly-Available HDFS

2013-07-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698281#comment-13698281
 ] 

Konstantin Shvachko commented on HDFS-4945:
---

Yonghwan, the ideas sound interesting. But it looks to me like a new file 
system rather than a new feature of HDFS. Do you plan to replace HDFS or evolve 
it?
I've been working on the 
[Giraffa|http://code.google.com/a/apache-extras.org/p/giraffa/source/browse/?name=trunk]
 project. Is it similar to your ideas?
You were saying "we" on several occasions. Who do you mean?

 A Distributed and Cooperative NameNode Cluster for a Highly-Available HDFS
 --

 Key: HDFS-4945
 URL: https://issues.apache.org/jira/browse/HDFS-4945
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: auto-failover
Affects Versions: HA branch (HDFS-1623)
Reporter: Yonghwan Kim
  Labels: documentation

 See the following comment for detailed description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698288#comment-13698288
 ] 

Hadoop QA commented on HDFS-4948:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590502/HDFS-4948.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4590//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4590//console

This message is automatically generated.

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698297#comment-13698297
 ] 

Brandon Li commented on HDFS-4948:
--

A unit test is not needed since this is not a code change.


 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-07-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698301#comment-13698301
 ] 

Suresh Srinivas commented on HDFS-4750:
---

[~brandonli] If this work is complete, given all the JIRAs going in, can you 
please merge it to branch 2.1?

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, HDFS-4750.patch, nfs-trunk.patch


 Access to HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with a client's file system makes it difficult for users, 
 and impossible for some applications, to access HDFS. NFS interface support is 
 one way for HDFS to offer such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4942) Add retry cache support in Namenode

2013-07-02 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4942:
--

Attachment: HDFSRetryCache.pdf

Here is an early version of the design document. Still pending is the analysis 
of all the Namenode RPC requests; I will post that soon.

Given that we plan on adding a unique identifier to every RPC request, should 
we get this change done before the 2.1.0-beta rc2 is built? This way 2.1.0-beta 
clients can utilize the retry cache as well.

 Add retry cache support in Namenode
 ---

 Key: HDFS-4942
 URL: https://issues.apache.org/jira/browse/HDFS-4942
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFSRetryCache.pdf


 In the current HA mechanism with FailoverProxyProvider, and in non-HA setups 
 with RetryProxy, a request is retried from the RPC layer. If the retried 
 request has already been processed at the namenode, the subsequent attempts 
 fail for non-idempotent operations such as create, append, delete, rename, 
 etc. This will cause application failures during HA failover, network issues, 
 etc.
 This jira proposes adding a retry cache at the namenode to handle these 
 failures. More details in the comments.
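
 A hedged sketch of the core idea (illustrative names; a real cache also needs 
 entry expiry and must make a concurrent retry wait for the in-flight first 
 attempt): key each non-idempotent request by the unique identifier the client 
 attaches, apply it at most once, and replay the remembered result on retries.
{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

public class RetryCacheSketch {
  private final ConcurrentHashMap<String, Object> done =
      new ConcurrentHashMap<String, Object>();

  /** Execute op once per (clientId, callId); retries replay the result. */
  public Object invoke(String clientId, int callId, Callable<Object> op)
      throws Exception {
    String key = clientId + ":" + callId;
    Object cached = done.get(key);
    if (cached != null) {
      return cached;           // retried request: do not re-apply
    }
    Object result = op.call(); // first attempt: apply the operation
    done.put(key, result);
    return result;
  }
}
{code}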

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698367#comment-13698367
 ] 

Suresh Srinivas commented on HDFS-4948:
---

+1 for the change.

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4951) FsShell commands using secure httpfs throw exceptions due to missing TokenRenewer

2013-07-02 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HDFS-4951:


Description: 
It looks like there isn't a {{TokenRenewer}} for HttpFS delegation tokens 
({{HTTPFS_DELEGATION_TOKEN}} tokens), so when it goes to cancel the token, it 
throws an exception:

{noformat}
$ hadoop fs -ls webhdfs://host:14000
// File listing omitted
13/06/21 13:09:04 WARN token.Token: No TokenRenewer defined for token kind 
HTTPFS_DELEGATION_TOKEN
13/06/21 13:09:04 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' 
failed, java.lang.UnsupportedOperationException: Token cancel is not supported  
for HTTPFS_DELEGATION_TOKEN tokens
java.lang.UnsupportedOperationException: Token cancel is not supported  for 
HTTPFS_DELEGATION_TOKEN tokens
at 
org.apache.hadoop.security.token.Token$TrivialRenewer.cancel(Token.java:417)
at org.apache.hadoop.security.token.Token.cancel(Token.java:382)
at 
org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:146)
at 
org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
at 
org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(DelegationTokenRenewer.java:233)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:790)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2398)
at 
org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2414)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
{noformat}

WebHDFS doesn't have this problem because it has a {{TokenRenewer}} for its 
delegation tokens ({{WEBHDFS delegation}} tokens).  

  was:
It looks like there isn't a {{TokenRenewer}} for HttpFS delegation tokens 
({{HTTPFS_DELEGATION_TOKENS}} tokens, so when it goes to cancel the token, it 
throws an exception:

{noformat}
$ hadoop fs -ls webhdfs://host:14000
// File listing omitted
13/06/21 13:09:04 WARN token.Token: No TokenRenewer defined for token kind 
HTTPFS_DELEGATION_TOKEN
13/06/21 13:09:04 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' 
failed, java.lang.UnsupportedOperationException: Token cancel is not supported  
for HTTPFS_DELEGATION_TOKEN tokens
java.lang.UnsupportedOperationException: Token cancel is not supported  for 
HTTPFS_DELEGATION_TOKEN tokens
at 
org.apache.hadoop.security.token.Token$TrivialRenewer.cancel(Token.java:417)
at org.apache.hadoop.security.token.Token.cancel(Token.java:382)
at 
org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:146)
at 
org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
at 
org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(DelegationTokenRenewer.java:233)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:790)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2398)
at 
org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2414)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
{noformat}

WebHDFS doesn't have this problem because it has a {{TokenRenewer}}.


 FsShell commands using secure httpfs throw exceptions due to missing 
 TokenRenewer
 -

 Key: HDFS-4951
 URL: https://issues.apache.org/jira/browse/HDFS-4951
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Robert Kanter
Assignee: Robert Kanter

 It looks like there isn't a {{TokenRenewer}} for HttpFS delegation tokens 
 ({{HTTPFS_DELEGATION_TOKEN}} tokens), so when it goes to cancel the token, it 
 throws an exception:
 {noformat}
 $ hadoop fs -ls webhdfs://host:14000
 // File listing omitted
 13/06/21 13:09:04 WARN token.Token: No TokenRenewer defined for token kind 
 HTTPFS_DELEGATION_TOKEN
 13/06/21 13:09:04 WARN util.ShutdownHookManager: ShutdownHook 
 'ClientFinalizer' failed, java.lang.UnsupportedOperationException: Token 
 cancel is not supported  for HTTPFS_DELEGATION_TOKEN tokens
 java.lang.UnsupportedOperationException: Token cancel is not supported  for 
 HTTPFS_DELEGATION_TOKEN tokens
   at 
 org.apache.hadoop.security.token.Token$TrivialRenewer.cancel(Token.java:417)
   at org.apache.hadoop.security.token.Token.cancel(Token.java:382)
   at 
 org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:146)
   at 
 org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
   at 
 

[jira] [Created] (HDFS-4951) FsShell commands using secure httpfs throw exceptions due to missing TokenRenewer

2013-07-02 Thread Robert Kanter (JIRA)
Robert Kanter created HDFS-4951:
---

 Summary: FsShell commands using secure httpfs throw exceptions due 
to missing TokenRenewer
 Key: HDFS-4951
 URL: https://issues.apache.org/jira/browse/HDFS-4951
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Robert Kanter
Assignee: Robert Kanter


It looks like there isn't a {{TokenRenewer}} for HttpFS delegation tokens 
({{HTTPFS_DELEGATION_TOKEN}} tokens), so when it goes to cancel the token, it 
throws an exception:

{noformat}
$ hadoop fs -ls webhdfs://host:14000
// File listing omitted
13/06/21 13:09:04 WARN token.Token: No TokenRenewer defined for token kind 
HTTPFS_DELEGATION_TOKEN
13/06/21 13:09:04 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' 
failed, java.lang.UnsupportedOperationException: Token cancel is not supported  
for HTTPFS_DELEGATION_TOKEN tokens
java.lang.UnsupportedOperationException: Token cancel is not supported  for 
HTTPFS_DELEGATION_TOKEN tokens
at 
org.apache.hadoop.security.token.Token$TrivialRenewer.cancel(Token.java:417)
at org.apache.hadoop.security.token.Token.cancel(Token.java:382)
at 
org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.cancel(DelegationTokenRenewer.java:146)
at 
org.apache.hadoop.fs.DelegationTokenRenewer$RenewAction.access$200(DelegationTokenRenewer.java:58)
at 
org.apache.hadoop.fs.DelegationTokenRenewer.removeRenewAction(DelegationTokenRenewer.java:233)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.close(WebHdfsFileSystem.java:790)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2398)
at 
org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2414)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
{noformat}

WebHDFS doesn't have this problem because it has a {{TokenRenewer}}.
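
As a rough illustration of the missing piece, a renewer for this token kind 
could look like the minimal sketch below. This is an assumption-laden sketch, 
not the eventual fix: the class name is invented and the server calls are 
elided. Hadoop discovers {{TokenRenewer}} implementations via 
{{java.util.ServiceLoader}}, so the class would also need to be listed in 
{{META-INF/services/org.apache.hadoop.security.token.TokenRenewer}}.

{code}
// Hypothetical sketch: a TokenRenewer that claims the HttpFS token kind,
// so Token#renew/cancel no longer fall back to the TrivialRenewer.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public class HttpFSTokenRenewer extends TokenRenewer {
  private static final Text KIND = new Text("HTTPFS_DELEGATION_TOKEN");

  @Override
  public boolean handleKind(Text kind) {
    return KIND.equals(kind);   // claim only HttpFS delegation tokens
  }

  @Override
  public boolean isManaged(Token<?> token) throws IOException {
    return true;                // renew and cancel are supported
  }

  @Override
  public long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    // A real implementation would call the HttpFS server's token
    // renewal operation here; elided in this sketch.
    return Long.MAX_VALUE;
  }

  @Override
  public void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    // A real implementation would call the HttpFS server's token
    // cancellation operation here; elided in this sketch.
  }
}
{code}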

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4952) dfs -ls hftp:// fails on secure hadoop2 cluster

2013-07-02 Thread yeshavora (JIRA)
yeshavora created HDFS-4952:
---

 Summary: dfs -ls hftp:// fails on secure hadoop2 cluster
 Key: HDFS-4952
 URL: https://issues.apache.org/jira/browse/HDFS-4952
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: yeshavora


Running {{hadoop dfs -ls hftp://namenode:namenodeport/A}} on a secure hadoop2 
cluster fails:
{noformat}
WARN fs.FileSystem: Couldn't connect to http://namenode:50470, assuming 
security is disabled
ls: Security enabled but user not authenticated by filter
{noformat}



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698407#comment-13698407
 ] 

Brandon Li commented on HDFS-4948:
--

I've committed this to trunk. Thanks!

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4948:
-

Fix Version/s: 3.0.0

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4948:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698409#comment-13698409
 ] 

Brandon Li commented on HDFS-4762:
--

It should be fixed now.

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4763) Add script changes/utility for starting NFS gateway

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4763:
-

Component/s: nfs

 Add script changes/utility for starting NFS gateway
 ---

 Key: HDFS-4763
 URL: https://issues.apache.org/jira/browse/HDFS-4763
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4750) Support NFSv3 interface to HDFS

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4750:
-

Component/s: nfs

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, HDFS-4750.patch, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack 
 of seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to achieve such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4947) Add NFS server export table to control export by hostname or IP range

2013-07-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4947:
-

Component/s: nfs

 Add NFS server export table to control export by hostname or IP range
 -

 Key: HDFS-4947
 URL: https://issues.apache.org/jira/browse/HDFS-4947
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Jing Zhao



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698416#comment-13698416
 ] 

Hudson commented on HDFS-4948:
--

Integrated in Hadoop-trunk-Commit #4036 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4036/])
HDFS-4948. mvn site for hadoop-hdfs-nfs fails. Contributed by Brandon Li 
(Revision 1499152)

 Result = SUCCESS
brandonli : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499152
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Fix For: 3.0.0

 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4942) Add retry cache support in Namenode

2013-07-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698423#comment-13698423
 ] 

Chris Nauroth commented on HDFS-4942:
-

The proposal looks good, and I'll be interested to see the analysis of the 
individual RPC calls. Reminder on something that came up in an offline 
conversation: it appears that we can change 
{{ClientProtocol#getDataEncryptionKey}} to annotate it as {{@Idempotent}}. It 
doesn't appear to mutate state, and if a retry causes creation of multiple 
keys, that shouldn't be a problem.

{quote}
Given that we plan on adding a unique identifier to every RPC request, should 
we get this change done before 2.1.0-beta rc2 is built? This way 2.1.0-beta 
clients can utilize retry cache as well.
{quote}

+1 for this idea.  Adding the UUID now would be a low-risk change.
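
For reference, the suggested annotation change would amount to something like 
the fragment below (illustrative only; the real method is declared on 
{{ClientProtocol}} and returns a {{DataEncryptionKey}}):

{code}
// Illustrative fragment, not the actual ClientProtocol source.
import java.io.IOException;
import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
import org.apache.hadoop.io.retry.Idempotent;

public interface ClientProtocolFragment {
  @Idempotent  // retries are safe: an extra generated key mutates no state
  DataEncryptionKey getDataEncryptionKey() throws IOException;
}
{code}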

 Add retry cache support in Namenode
 ---

 Key: HDFS-4942
 URL: https://issues.apache.org/jira/browse/HDFS-4942
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFSRetryCache.pdf


 In the current HA mechanism with FailoverProxyProvider, and in non-HA setups 
 with RetryProxy, a request is retried from the RPC layer. If the retried 
 request has already been processed at the namenode, the subsequent attempts 
 fail for non-idempotent operations such as create, append, delete, rename, 
 etc. This causes application failures during HA failover, network issues, etc.
 This jira proposes adding a retry cache at the namenode to handle these 
 failures. More details are in the comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4942) Add retry cache support in Namenode

2013-07-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698425#comment-13698425
 ] 

Suresh Srinivas commented on HDFS-4942:
---

I created HADOOP-9688 to add a unique request ID to RPC requests. I have also 
posted an early patch.

 Add retry cache support in Namenode
 ---

 Key: HDFS-4942
 URL: https://issues.apache.org/jira/browse/HDFS-4942
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFSRetryCache.pdf


 In the current HA mechanism with FailoverProxyProvider, and in non-HA setups 
 with RetryProxy, a request is retried from the RPC layer. If the retried 
 request has already been processed at the namenode, the subsequent attempts 
 fail for non-idempotent operations such as create, append, delete, rename, 
 etc. This causes application failures during HA failover, network issues, etc.
 This jira proposes adding a retry cache at the namenode to handle these 
 failures. More details are in the comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HDFS-4942) Add retry cache support in Namenode

2013-07-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698425#comment-13698425
 ] 

Suresh Srinivas edited comment on HDFS-4942 at 7/3/13 12:21 AM:


I created HADOOP-9688 to add a unique request ID to RPC requests. I have also 
posted an early patch.

  was (Author: sureshms):
I create HADOOP-9688 to add unique request ID to RPC requests. I also have 
posted an early patch.
  
 Add retry cache support in Namenode
 ---

 Key: HDFS-4942
 URL: https://issues.apache.org/jira/browse/HDFS-4942
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFSRetryCache.pdf


 In the current HA mechanism with FailoverProxyProvider, and in non-HA setups 
 with RetryProxy, a request is retried from the RPC layer. If the retried 
 request has already been processed at the namenode, the subsequent attempts 
 fail for non-idempotent operations such as create, append, delete, rename, 
 etc. This causes application failures during HA failover, network issues, etc.
 This jira proposes adding a retry cache at the namenode to handle these 
 failures. More details are in the comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4953) enable HDFS local reads via mmap

2013-07-02 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4953:
--

 Summary: enable HDFS local reads via mmap
 Key: HDFS-4953
 URL: https://issues.apache.org/jira/browse/HDFS-4953
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Currently, the short-circuit local read pathway allows HDFS clients to access 
files directly without going through the DataNode. However, all of these reads 
involve a copy at the operating system level, since they rely on the 
read()/pread() family of kernel interfaces.

We would like to enable HDFS to read local files via mmap. This would enable 
truly zero-copy reads.

In the initial implementation, zero-copy reads will only be performed when 
checksums are disabled. Later, we can use the DataNode's cache awareness to 
perform zero-copy reads only when we know the checksum has already been 
verified.
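
For readers unfamiliar with the mechanism, the JDK-level shape of such a read 
is sketched below. This illustrates mmap-based reading in general, not the 
HDFS-4953 patch; the block-file path handling is an assumption.

{code}
// Sketch: mapping a local block file instead of copying it with read().
// The mapped buffer exposes the kernel page cache directly to the client.
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapReadSketch {
  public static void main(String[] args) throws Exception {
    String blockFile = args[0];   // path to a local block replica (assumed)
    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "r");
         FileChannel ch = raf.getChannel()) {
      // Map the file read-only; no bytes are copied into user space,
      // and pages are faulted in as they are touched.
      MappedByteBuffer buf =
          ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      long sum = 0;
      while (buf.hasRemaining()) {
        sum += buf.get();         // touch the mapped pages
      }
      System.out.println("read " + ch.size() + " bytes, sum=" + sum);
    }
  }
}
{code}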

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-427) fuse-dfs should support symlinks

2013-07-02 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698482#comment-13698482
 ] 

Colin Patrick McCabe commented on HDFS-427:
---

Note: this also includes implementing the symlink-resolving logic in 
{{Trash#moveToAppropriateTrash}}.

 fuse-dfs should support symlinks
 

 Key: HDFS-427
 URL: https://issues.apache.org/jira/browse/HDFS-427
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fuse-dfs
Reporter: Pete Wyckoff

 implement dfs_symlink(from, to) and dfs_readlink(path,buf,size)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4504) DFSOutputStream#close doesn't always release resources (such as leases)

2013-07-02 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698487#comment-13698487
 ] 

Colin Patrick McCabe commented on HDFS-4504:


That's an interesting idea: calling recoverLease from the client itself. It 
might have the advantage of requiring less new code compared to adding a new 
flag to {{complete()}}.
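
A rough sketch of what client-side fallback to {{recoverLease}} could look 
like is below; this is only an illustration of the idea under discussion, not 
a proposed patch.

{code}
// Sketch: if close() keeps rethrowing, fall back to recoverLease() so the
// NameNode releases the lease instead of the client renewing it forever.
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CloseWithLeaseRecovery {
  static void closeOrRecover(DistributedFileSystem dfs, Path file,
                             FSDataOutputStream out) throws IOException {
    try {
      out.close();   // normal path: completes the file, releases the lease
    } catch (IOException e) {
      // recoverLease() returns true if the file could be closed right away;
      // otherwise lease recovery continues asynchronously on the NameNode.
      boolean closedNow = dfs.recoverLease(file);
      System.err.println("close() failed; recoverLease returned " + closedNow);
    }
  }
}
{code}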

 DFSOutputStream#close doesn't always release resources (such as leases)
 ---

 Key: HDFS-4504
 URL: https://issues.apache.org/jira/browse/HDFS-4504
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4504.001.patch, HDFS-4504.002.patch


 {{DFSOutputStream#close}} can throw an {{IOException}} in some cases.  One 
 example is if there is a pipeline error and then pipeline recovery fails.  
 Unfortunately, in this case, some of the resources used by the 
 {{DFSOutputStream}} are leaked.  One particularly important resource is file 
 leases.
 So it's possible for a long-lived HDFS client, such as Flume, to write many 
 blocks to a file, but then fail to close it.  Unfortunately, the 
 {{LeaseRenewerThread}} inside the client will continue to renew the lease for 
 the undead file.  Future attempts to close the file will just rethrow the 
 previous exception, and no progress can be made by the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4465) Optimize datanode ReplicasMap and ReplicaInfo

2013-07-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-4465:
-

Attachment: HDFS-4465.patch

Here's an updated patch which should address all of your feedback, Suresh.

# Good thinking. I did some back-of-the-envelope math which suggested that even 
1% was probably higher than necessary for a typical DN. Switched this to 0.5%.
# Per previous discussion, left it extending Block and added a comment.
# Good thinking. Moved the parsing code to a separate static function and added 
a test for it.
# In my testing with a DN with ~1 million blocks, this patch takes each replica 
from ~635 bytes to ~250 bytes, about a 2.5x improvement.

Note that to address the findbugs warning I had to add an exception to the 
findbugs exclude file, since in this patch I am very deliberately using the 
String(String) constructor so as to trim the underlying char[] array.
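
For context on the {{String(String)}} trick: before JDK 7u6, 
{{String.substring()}} shared the parent string's backing {{char[]}}, so a 
short name cut out of a long path could pin the whole path's characters in 
memory. A small illustration (paths made up):

{code}
// Why copying the string helps: the copy's backing array holds only the
// characters it needs, while a substring may share the long original's.
public class StringTrimSketch {
  public static void main(String[] args) {
    String longPath = "/data/1/dfs/dn/current/BP-1/current/finalized/blk_123";
    // On pre-7u6 JDKs this may share longPath's entire backing char[]:
    String shared = longPath.substring(longPath.lastIndexOf('/') + 1);
    // Guaranteed to hold only its own characters:
    String trimmed = new String(shared);
    System.out.println(shared + " == " + trimmed);
  }
}
{code}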

 Optimize datanode ReplicasMap and ReplicaInfo
 -

 Key: HDFS-4465
 URL: https://issues.apache.org/jira/browse/HDFS-4465
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.0.5-alpha
Reporter: Suresh Srinivas
Assignee: Aaron T. Myers
 Attachments: dn-memory-improvements.patch, HDFS-4465.patch, 
 HDFS-4465.patch


 In Hadoop a lot of optimization has been done in namenode data structures to 
 be memory efficient. Similar optimizations are necessary for Datanode 
 process. With the growth in storage per datanode and number of blocks hosted 
 on datanode, this jira intends to optimize long lived ReplicasMap and 
 ReplicaInfo objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2013-07-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4913:
---

Attachment: HDFS-4913.002.patch

The issue here, I believe, is that fuse_dfs always puts the file into 
{{/user/root/.Trash/Current}}, whereas it should put it into 
{{/user/${USERNAME}/.Trash/Current}}.

We can get the username from the fuse_context. Although it comes as a UID, we 
can map it back to a string; this patch does that.

 Deleting file through fuse-dfs when using trash fails requiring root 
 permissions
 

 Key: HDFS-4913
 URL: https://issues.apache.org/jira/browse/HDFS-4913
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.0.3-alpha
Reporter: Stephen Chu
 Attachments: HDFS-4913.002.patch


 As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
 As _testuser_, I cd into the mount and touch a test file at 
 _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
 into an error:
 {code}
 [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
 [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
 [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
 rm: cannot remove `testFile1': Unknown error 255
 {code}
 I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
 /user/root/.Trash, which testuser doesn't have permissions to.
 Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
 /user/testuser/.Trash instead of /user/root/.Trash.
 Error in debug:
 {code}
 unlink /user/testuser/testFile1
 hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
 FileSystem#mkdirs error:
 org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=WRITE, inode=/user/root:root:supergroup:drwxr-xr-x
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
  at 
 org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
  at 
 java.security.AccessController.doPrivileged(Native Method)
  at 
 javax.security.auth.Subject.doAs(Subject.java:396)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
  at 
 org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
  at 
 

[jira] [Updated] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2013-07-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4913:
---

Assignee: Colin Patrick McCabe
  Status: Patch Available  (was: Open)

 Deleting file through fuse-dfs when using trash fails requiring root 
 permissions
 

 Key: HDFS-4913
 URL: https://issues.apache.org/jira/browse/HDFS-4913
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.0.3-alpha
Reporter: Stephen Chu
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4913.002.patch


 As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
 As _testuser_, I cd into the mount and touch a test file at 
 _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
 into an error:
 {code}
 [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
 [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
 [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
 rm: cannot remove `testFile1': Unknown error 255
 {code}
 I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
 /user/root/.Trash, which testuser doesn't have permissions to.
 Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
 /user/testuser/.Trash instead of /user/root/.Trash.
 Error in debug:
 {code}
 unlink /user/testuser/testFile1
 hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
 FileSystem#mkdirs error:
 org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=WRITE, inode=/user/root:root:supergroup:drwxr-xr-x
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
  at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
  at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
  at 
 org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
  at 
 java.security.AccessController.doPrivileged(Native Method)
  at 
 javax.security.auth.Subject.doAs(Subject.java:396)
  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
  at 
 org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
  at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
  at 
 

[jira] [Updated] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-4860:
-

Assignee: Trevor Lorimer

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, there are several other attributes that 
 need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses); a read-side sketch follows the list:
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)
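 
 As a read-side sketch (assuming the standard NameNode JMX object name, and 
 reading an attribute that already exists; the new attributes above are still 
 proposals), a client inside the NameNode JVM could query the bean like this; 
 remote clients would use a JMXConnector instead:
 {code}
 import java.lang.management.ManagementFactory;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
 public class JmxReadSketch {
   public static void main(String[] args) throws Exception {
     // In-process platform MBean server, where Hadoop registers its beans.
     MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
     ObjectName name =
         new ObjectName("Hadoop:service=NameNode,name=NameNodeInfo");
     // "Version" already exists on NameNodeInfo; the attributes proposed
     // above would be read the same way once added.
     Object version = mbs.getAttribute(name, "Version");
     System.out.println("NameNode version: " + version);
   }
 }
 {code}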

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4860) Add additional attributes to JMX beans

2013-07-02 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698537#comment-13698537
 ] 

Konstantin Boudnik commented on HDFS-4860:
--

I like the improvement, and the patch looks good to me.
+1

 Add additional attributes to JMX beans
 --

 Key: HDFS-4860
 URL: https://issues.apache.org/jira/browse/HDFS-4860
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 0.20.204.1, 3.0.0, 2.1.0-beta, 2.0.4-alpha
Reporter: Trevor Lorimer
Assignee: Trevor Lorimer
 Attachments: HDFS-4860-3.diff


 Currently the JMX bean returns much of the data contained on the HDFS Health 
 webpage (dfsHealth.html). However, there are several other attributes that 
 need to be added.
 I intend to add the following items to the appropriate bean (named in 
 parentheses):
 Started time (NameNodeInfo),
 Compiled info (NameNodeInfo),
 Jvm MaxHeap, MaxNonHeap (JvmMetrics),
 Node Usage stats (i.e. Min, Median, Max, stdev) (NameNodeInfo),
 Count of decommissioned Live and Dead nodes (FSNamesystemState),
 Journal Status (NameNodeInfo)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4465) Optimize datanode ReplicasMap and ReplicaInfo

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698543#comment-13698543
 ] 

Hadoop QA commented on HDFS-4465:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590557/HDFS-4465.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4591//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4591//console

This message is automatically generated.

 Optimize datanode ReplicasMap and ReplicaInfo
 -

 Key: HDFS-4465
 URL: https://issues.apache.org/jira/browse/HDFS-4465
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.0.5-alpha
Reporter: Suresh Srinivas
Assignee: Aaron T. Myers
 Attachments: dn-memory-improvements.patch, HDFS-4465.patch, 
 HDFS-4465.patch


 In Hadoop a lot of optimization has been done in namenode data structures to 
 be memory efficient. Similar optimizations are necessary for Datanode 
 process. With the growth in storage per datanode and number of blocks hosted 
 on datanode, this jira intends to optimize long lived ReplicasMap and 
 ReplicaInfo objects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4948) mvn site for hadoop-hdfs-nfs fails

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4948:
-

Fix Version/s: 2.3.0
   2.1.0-beta

 mvn site for hadoop-hdfs-nfs fails
 --

 Key: HDFS-4948
 URL: https://issues.apache.org/jira/browse/HDFS-4948
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Brandon Li
 Fix For: 3.0.0, 2.1.0-beta, 2.3.0

 Attachments: HDFS-4948.patch


 Running mvn site on trunk results in the following error.
 {noformat}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project 
 hadoop-hdfs-nfs: An Ant BuildException has occured: Warning: Could not find 
 file 
 /home/evans/src/hadoop-git/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/resources/hdfs-nfs-default.xml
  to copy. - [Help 1]
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-07-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4762:
-

Fix Version/s: 2.3.0
   2.1.0-beta

 Provide HDFS based NFSv3 and Mountd implementation
 --

 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 2.1.0-beta, 2.3.0

 Attachments: HDFS-4762.patch, HDFS-4762.patch.2, HDFS-4762.patch.3, 
 HDFS-4762.patch.3, HDFS-4762.patch.4, HDFS-4762.patch.5, HDFS-4762.patch.6, 
 HDFS-4762.patch.7


 This is to track the implementation of NFSv3 to HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-07-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698561#comment-13698561
 ] 

Brandon Li commented on HDFS-4750:
--

I've merged HADOOP-9009, HADOOP-9515, HDFS-4762, and HDFS-4948 into branch-2 
and branch-2.1.

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf, HDFS-4750.patch, nfs-trunk.patch


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack 
 of seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to achieve such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases. 
 We will upload the design document and the initial implementation. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2013-07-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698577#comment-13698577
 ] 

Hadoop QA commented on HDFS-4913:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590559/HDFS-4913.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4592//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4592//console

This message is automatically generated.

 Deleting file through fuse-dfs when using trash fails requiring root 
 permissions
 

 Key: HDFS-4913
 URL: https://issues.apache.org/jira/browse/HDFS-4913
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 2.0.3-alpha
Reporter: Stephen Chu
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4913.002.patch


 As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
 As _testuser_, I cd into the mount and touch a test file at 
 _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
 into an error:
 {code}
 [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
 [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
 [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
 rm: cannot remove `testFile1': Unknown error 255
 {code}
 I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
 /user/root/.Trash, which testuser doesn't have permissions to.
 Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
 /user/testuser/.Trash instead of /user/root/.Trash.
 Error in debug:
 {code}
 unlink /user/testuser/testFile1
 hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): 
 FileSystem#mkdirs error:
 org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=WRITE, inode=/user/root:root:supergroup:drwxr-xr-x
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
  at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
  at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)