[jira] [Commented] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723433#comment-13723433
 ] 

Jing Zhao commented on HDFS-5025:
-

Thanks Suresh! I've created HDFS-5045 to address the TODO.

I've run some simple manual HA tests in a 6-node cluster, in which I forced NN 
failover periodically while sending client requests with LoadGenerator. The 
retry cache appears to work, though we still need HADOOP-9792. I will keep 
running HA tests on top of branch-2.1.0-beta.

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch, HDFS-5025.004.patch, 
> HDFS-5025.005.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Created] (HDFS-5045) Add more unit tests for retry cache to cover all AtMostOnce methods

2013-07-29 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-5045:
---

 Summary: Add more unit tests for retry cache to cover all 
AtMostOnce methods
 Key: HDFS-5045
 URL: https://issues.apache.org/jira/browse/HDFS-5045
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor






[jira] [Commented] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723427#comment-13723427
 ] 

Suresh Srinivas commented on HDFS-5025:
---

For the TODO you have in the test, can you please create another jira? Other 
than that this change looks good. Nice tests.

Did you run any manual HA tests?


> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch, HDFS-5025.004.patch, 
> HDFS-5025.005.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Updated] (HDFS-3788) distcp can't copy large files using webhdfs due to missing Content-Length header

2013-07-29 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-3788:
--

Target Version/s:   (was: 2.0.2-alpha)

> distcp can't copy large files using webhdfs due to missing Content-Length 
> header
> 
>
> Key: HDFS-3788
> URL: https://issues.apache.org/jira/browse/HDFS-3788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Critical
> Fix For: 0.23.3, 2.0.2-alpha
>
> Attachments: 20120814NullEntity.patch, distcp-webhdfs-errors.txt, 
> h3788_20120813.patch, h3788_20120814b.patch, h3788_20120814.patch, 
> h3788_20120815.patch, h3788_20120816.patch
>
>
> The following command fails when data1 contains a 3gb file. It passes when 
> using hftp or when the directory just contains smaller (<2gb) files, so looks 
> like a webhdfs issue with large files.
> {{hadoop distcp webhdfs://eli-thinkpad:50070/user/eli/data1 
> hdfs://localhost:8020/user/eli/data2}}



[jira] [Updated] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5025:


Status: Patch Available  (was: Open)

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch, HDFS-5025.004.patch, 
> HDFS-5025.005.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Updated] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5025:


Attachment: HDFS-5025.005.patch

Updated the patch.

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch, HDFS-5025.004.patch, 
> HDFS-5025.005.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Commented] (HDFS-5031) BlockScanner scans the block multiple times and on restart scans everything

2013-07-29 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723351#comment-13723351
 ] 

Aaron T. Myers commented on HDFS-5031:
--

Hi Vinay, I'm not sure this should be considered a blocker. Is it a 
regression? What actual problem is it causing?

> BlockScanner scans the block multiple times and on restart scans everything
> ---
>
> Key: HDFS-5031
> URL: https://issues.apache.org/jira/browse/HDFS-5031
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Vinay
>Assignee: Vinay
>Priority: Blocker
> Attachments: HDFS-5031.patch
>
>
> BlockScanner scans each block twice, and on restart of the datanode it scans 
> everything.
> Steps:
> 1. Write blocks at an interval of more than 5 seconds, writing a new block on 
> completion of the scan for the previous block.
> Each time the datanode scans a new block, it also scans the previous block, 
> which was already scanned.
> Now after a restart, the datanode scans all blocks again.



[jira] [Commented] (HDFS-4949) Centralized cache management in HDFS

2013-07-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723306#comment-13723306
 ] 

Suresh Srinivas commented on HDFS-4949:
---

As discussed in the comments earlier, a few of us are going to meet to discuss 
the design and issues related to this jira. I have set up a meeting at the 
Hortonworks office. We should be able to host around 15-20 people. I already 
have [~andrew.wang], [~arpitgupta], [~atm], [~bikassaha], [~cmccabe], 
[~sanjay.radia], [~sureshms], [~vinodkv], [~jingzhao] and gopal as attendees. 
If you want to attend the meeting or join over the phone, please reach out to 
me at sur...@hortonworks.com.

We will post notes from the discussion to this jira.

> Centralized cache management in HDFS
> 
>
> Key: HDFS-4949
> URL: https://issues.apache.org/jira/browse/HDFS-4949
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: caching-design-doc-2013-07-02.pdf
>
>
> HDFS currently has no support for managing or exposing in-memory caches at 
> datanodes. This makes it harder for higher level application frameworks like 
> Hive, Pig, and Impala to effectively use cluster memory, because they cannot 
> explicitly cache important datasets or place their tasks for memory locality.



[jira] [Resolved] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved HDFS-5044.
--

Resolution: Duplicate

> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Fix For: 3.0.0
>
> Attachments: HDFS-5044.patch
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect that shows like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Commented] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723288#comment-13723288
 ] 

Kousuke Saruta commented on HDFS-5044:
--

Andrew, thank you for letting me know.
Got it; I'll close this jira as a duplicate of HDFS-4019.

> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Fix For: 3.0.0
>
> Attachments: HDFS-5044.patch
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect that shows like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Commented] (HDFS-4772) Add number of children in HdfsFileStatus

2013-07-29 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723289#comment-13723289
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4772:
--

Thanks, I will review HDFS-5043.

> Add number of children in HdfsFileStatus
> 
>
> Key: HDFS-4772
> URL: https://issues.apache.org/jira/browse/HDFS-4772
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0, 2.1.0-beta, 2.3.0
>
> Attachments: HDFS-4772.2.patch, HDFS-4772.3.patch, 
> HDFS-4772.branch-2.patch, HDFS-4772.patch
>
>
> This JIRA is to track the change to return the number of children for a 
> directory, so the client doesn't need to make a getListing() call to 
> calculate the number of dirents. This makes it convenient for the client to 
> check directory size change.



[jira] [Commented] (HDFS-4940) namenode OOMs under Bigtop's TestCLI

2013-07-29 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723282#comment-13723282
 ] 

Arun C Murthy commented on HDFS-4940:
-

[~rvs] Did you get a chance to look at this? I'm inclined to push this to 
2.1.1-beta to unblock 2.1.0-beta, since it will be good to get feedback on the 
APIs etc. while we investigate this further. Does that make sense? Thanks.

> namenode OOMs under Bigtop's TestCLI
> 
>
> Key: HDFS-4940
> URL: https://issues.apache.org/jira/browse/HDFS-4940
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Blocker
> Fix For: 2.1.0-beta
>
>
> Bigtop's TestCLI, when executed against Hadoop 2.1.0, seems to make it OOM 
> quite reliably regardless of the heap size settings. I'm attaching a heap 
> dump URL. Alternatively, anybody can just take Bigtop's tests, compile them 
> against Hadoop 2.1.0 bits and try to reproduce it.  



[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723272#comment-13723272
 ] 

Kousuke Saruta commented on HDFS-5033:
--

I agree with you; the behavior should be the same as with the local file 
system.

> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Karthik Kambatla
>Priority: Minor
>  Labels: noob
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



[jira] [Commented] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723256#comment-13723256
 ] 

Andrew Wang commented on HDFS-5044:
---

Hey Kousuke, Colin's been working on improvements to FSShell related to 
symlinks at HDFS-4019, and I think it meets your use case. See:

https://issues.apache.org/jira/browse/HDFS-4019?focusedCommentId=13717784&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13717784

If you find this satisfactory, we can dupe this JIRA to HDFS-4019.


> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Fix For: 3.0.0
>
> Attachments: HDFS-5044.patch
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect that shows like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Commented] (HDFS-5028) LeaseRenewer throw java.util.ConcurrentModificationException when timeout

2013-07-29 Thread zhaoyunjiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723253#comment-13723253
 ] 

zhaoyunjiong commented on HDFS-5028:


Access to dfsclients is already synchronized. The problem here is the iterator.
You can find more background here:
http://stackoverflow.com/questions/8189466/java-util-concurrentmodificationexception

In short: the iterators returned by this class's iterator and listIterator 
methods are fail-fast: if the list is structurally modified at any time after 
the iterator is created, in any way except through the iterator's own remove or 
add methods, the iterator will throw a ConcurrentModificationException.

c.abort() will remove c (a DFSClient) from dfsclients, so the iterator created 
by "for(DFSClient c : dfsclients)" will throw ConcurrentModificationException.

> LeaseRenewer throw java.util.ConcurrentModificationException when timeout
> -
>
> Key: HDFS-5028
> URL: https://issues.apache.org/jira/browse/HDFS-5028
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.0.0-alpha
>Reporter: zhaoyunjiong
> Fix For: 1.1.3
>
> Attachments: HDFS-5028-branch-1.1.patch, HDFS-5028.patch
>
>
> In LeaseRenewer, when renew() throws a SocketTimeoutException, c.abort() will 
> remove one dfsclient from dfsclients. This throws a 
> ConcurrentModificationException because dfsclients changed after the iterator 
> was created by "for(DFSClient c : dfsclients)":
> Exception in thread "org.apache.hadoop.hdfs.LeaseRenewer$1@75fa1077" 
> java.util.ConcurrentModificationException
> at 
> java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
> at java.util.AbstractList$Itr.next(AbstractList.java:343)
> at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:406)
> at 
> org.apache.hadoop.hdfs.LeaseRenewer.access$600(LeaseRenewer.java:69)
> at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:273)
> at java.lang.Thread.run(Thread.java:662)



[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723251#comment-13723251
 ] 

Karthik Kambatla commented on HDFS-5033:


On Linux (POSIX), this is what I see:

{noformat}
kasha@keka:~/code/hadoop-trunk (trunk)$ ls -l testfile 
-rw--- 1 root root 8 Jul 29 18:31 testfile
kasha@keka:~/code/hadoop-trunk (trunk)$ cat testfile 
cat: testfile: Permission denied
{noformat}

When accessing local files/dirs through HDFS, the behavior should be the same 
as when using the local file system directly.

> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Karthik Kambatla
>Priority: Minor
>  Labels: noob
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723249#comment-13723249
 ] 

Kousuke Saruta commented on HDFS-5033:
--

"Permission Denied" lets malicious users know existence of files. I think if 
one doesn't allow to read a file, we shouldn't let them know.

> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Karthik Kambatla
>Priority: Minor
>  Labels: noob
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



[jira] [Updated] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5044:
-

Description: 
In current implementation of HDFS, dfs -ls doesn't show the character which 
means symlink and also doesn't show symlink target.
I expect that shows like as follows.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target

  was:
In current implementation of HDFS, dfs -ls doesn't show the character which 
means symlink and also doesn't show symlink target.
I expect  like as follows.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target


> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
> Fix For: 3.0.0
>
> Attachments: HDFS-5044.patch
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect that shows like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Assigned] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-5044:


Assignee: Kousuke Saruta

> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Fix For: 3.0.0
>
> Attachments: HDFS-5044.patch
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect that shows like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Updated] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5044:
-

Attachment: HDFS-5044.patch

I've attached an initial patch.

> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
> Fix For: 3.0.0
>
> Attachments: HDFS-5044.patch
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect that shows like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Updated] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5044:
-

Description: 
In current implementation of HDFS, dfs -ls doesn't show the character which 
means symlink and also doesn't show symlink target.
I expect  like as follows.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target

  was:
In current implementation of HDFS, dfs -ls doesn't show the character which 
means symlink and also doesn't show symlink target like as follows.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target


> dfs -ls should show the character which means symlink and link target
> -
>
> Key: HDFS-5044
> URL: https://issues.apache.org/jira/browse/HDFS-5044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
> Fix For: 3.0.0
>
>
> In current implementation of HDFS, dfs -ls doesn't show the character which 
> means symlink and also doesn't show symlink target.
> I expect  like as follows.
> lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
> link_target



[jira] [Created] (HDFS-5044) dfs -ls should show the character which means symlink and link target

2013-07-29 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5044:


 Summary: dfs -ls should show the character which means symlink and 
link target
 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0


In current implementation of HDFS, dfs -ls doesn't show the character which 
means symlink and also doesn't show symlink target like as follows.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target



[jira] [Updated] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5025:


Attachment: HDFS-5025.004.patch

Thanks for the review and comments, Suresh and Colin! I've updated the patch 
to address Suresh's comments.

For the ClientId and UUID part, I added a new ClientId class that provides 
helper methods for converting the UUID-based client ID between its byte and 
string representations.

The patch depends on HADOOP-9786, so I will trigger Jenkins later.
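
A hedged sketch of what such conversion helpers can look like; this is 
illustrative and may not match the actual ClientId class:

{code}
import java.nio.ByteBuffer;
import java.util.UUID;

// Illustrative helpers; the real ClientId class may differ.
public final class ClientIdHelper {
  public static final int BYTE_LENGTH = 16;  // a UUID is 128 bits

  public static byte[] toBytes(String id) {
    UUID uuid = UUID.fromString(id);
    ByteBuffer buf = ByteBuffer.allocate(BYTE_LENGTH);
    buf.putLong(uuid.getMostSignificantBits());
    buf.putLong(uuid.getLeastSignificantBits());
    return buf.array();
  }

  public static String toString(byte[] bytes) {
    ByteBuffer buf = ByteBuffer.wrap(bytes);
    return new UUID(buf.getLong(), buf.getLong()).toString();
  }
}
{code}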

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch, HDFS-5025.004.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Commented] (HDFS-4772) Add number of children in HdfsFileStatus

2013-07-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723134#comment-13723134
 ] 

Brandon Li commented on HDFS-4772:
--

I've filed HDFS-5403 to address the above mentioned concern.

> Add number of children in HdfsFileStatus
> 
>
> Key: HDFS-4772
> URL: https://issues.apache.org/jira/browse/HDFS-4772
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0, 2.1.0-beta, 2.3.0
>
> Attachments: HDFS-4772.2.patch, HDFS-4772.3.patch, 
> HDFS-4772.branch-2.patch, HDFS-4772.patch
>
>
> This JIRA is to track the change to return the number of children for a 
> directory, so the client doesn't need to make a getListing() call to 
> calculate the number of dirents. This makes it convenient for the client to 
> check directory size change.



[jira] [Commented] (HDFS-4772) Add number of children in HdfsFileStatus

2013-07-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723136#comment-13723136
 ] 

Brandon Li commented on HDFS-4772:
--

{quote}I've filed HDFS-5403 to address the above mentioned concern.{quote}
typo: should be HDFS-5043.

> Add number of children in HdfsFileStatus
> 
>
> Key: HDFS-4772
> URL: https://issues.apache.org/jira/browse/HDFS-4772
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0, 2.1.0-beta, 2.3.0
>
> Attachments: HDFS-4772.2.patch, HDFS-4772.3.patch, 
> HDFS-4772.branch-2.patch, HDFS-4772.patch
>
>
> This JIRA is to track the change to return the number of children for a 
> directory, so the client doesn't need to make a getListing() call to 
> calculate the number of dirents. This makes it convenient for the client to 
> check directory size change.



[jira] [Created] (HDFS-5043) For HdfsFileStatus, set default value of childrenNum to -1 instead of 0 to avoid confusing applications

2013-07-29 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5043:


 Summary: For HdfsFileStatus, set default value of childrenNum to 
-1 instead of 0 to avoid confusing applications
 Key: HDFS-5043
 URL: https://issues.apache.org/jira/browse/HDFS-5043
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li


Per the discussion in HDFS-4772, the default value 0 can confuse an 
application, since it cannot tell whether the server doesn't support 
childrenNum or the directory simply has no children.
Use -1 instead to avoid this confusion.
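
A small sketch of the sentinel convention; the field and method names are 
illustrative, not the real HdfsFileStatus API:

{code}
// Illustrative only; the real HdfsFileStatus API differs.
class FileStatusView {
  static final int CHILDREN_NUM_UNKNOWN = -1;

  int childrenNum = CHILDREN_NUM_UNKNOWN;  // default: server did not report it

  String describe() {
    if (childrenNum == CHILDREN_NUM_UNKNOWN) {
      return "children count not reported by server";
    }
    return childrenNum + " children";  // 0 now unambiguously means "empty"
  }
}
{code}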



[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723108#comment-13723108
 ] 

Karthik Kambatla commented on HDFS-5033:


The source for put/copyFromLocal is on the client. From a security POV, it 
should be okay to show the "Permission Denied" message. 

> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Karthik Kambatla
>Priority: Minor
>  Labels: noob
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



[jira] [Updated] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5025:


Attachment: editsStored

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Updated] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5025:


Status: Open  (was: Patch Available)

Cancel patch to wait for HADOOP-9786.

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Updated] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5025:


Attachment: (was: editsStored)

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Commented] (HDFS-5028) LeaseRenewer throw java.util.ConcurrentModificationException when timeout

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723095#comment-13723095
 ] 

Kousuke Saruta commented on HDFS-5028:
--

Hi zhaoyunjiong,
I think your modification may reduce the likelihood of this issue, but it 
doesn't address the root cause.
Instead, how about synchronizing on "dfsclients"?
What do you think?

> LeaseRenewer throw java.util.ConcurrentModificationException when timeout
> -
>
> Key: HDFS-5028
> URL: https://issues.apache.org/jira/browse/HDFS-5028
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.0.0-alpha
>Reporter: zhaoyunjiong
> Fix For: 1.1.3
>
> Attachments: HDFS-5028-branch-1.1.patch, HDFS-5028.patch
>
>
> In LeaseRenewer, when renew() throws a SocketTimeoutException, c.abort() will 
> remove one dfsclient from dfsclients. This throws a 
> ConcurrentModificationException because dfsclients changed after the iterator 
> was created by "for(DFSClient c : dfsclients)":
> Exception in thread "org.apache.hadoop.hdfs.LeaseRenewer$1@75fa1077" 
> java.util.ConcurrentModificationException
> at 
> java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
> at java.util.AbstractList$Itr.next(AbstractList.java:343)
> at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:406)
> at 
> org.apache.hadoop.hdfs.LeaseRenewer.access$600(LeaseRenewer.java:69)
> at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:273)
> at java.lang.Thread.run(Thread.java:662)



[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723022#comment-13723022
 ] 

Kousuke Saruta commented on HDFS-5033:
--

Hi Karthik,

From a security point of view, I think a message that means "Permission 
Denied" should not be displayed on the client side. Instead, it should be 
logged on the server side in the audit log.

What do you think?

> Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
> to read the source
> ---
>
> Key: HDFS-5033
> URL: https://issues.apache.org/jira/browse/HDFS-5033
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Karthik Kambatla
>Priority: Minor
>  Labels: noob
>
> fs -put/copyFromLocal shows a "No such file or directory" error when the user 
> doesn't have permissions to read the source file/directory. Saying 
> "Permission Denied" is more useful to the user.



[jira] [Commented] (HDFS-4772) Add number of children in HdfsFileStatus

2013-07-29 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723008#comment-13723008
 ] 

Brandon Li commented on HDFS-4772:
--

You are right. It's better to let the dfsclient return -1 to the application 
to indicate that childrenNum is not supported by the server. Should we reopen 
this JIRA or create another JIRA for this?

> Add number of children in HdfsFileStatus
> 
>
> Key: HDFS-4772
> URL: https://issues.apache.org/jira/browse/HDFS-4772
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 3.0.0, 2.1.0-beta, 2.3.0
>
> Attachments: HDFS-4772.2.patch, HDFS-4772.3.patch, 
> HDFS-4772.branch-2.patch, HDFS-4772.patch
>
>
> This JIRA is to track the change to return the number of children for a 
> directory, so the client doesn't need to make a getListing() call to 
> calculate the number of dirents. This makes it convenient for the client to 
> check directory size change.



[jira] [Created] (HDFS-5042) Completed files lost after power failure

2013-07-29 Thread Dave Latham (JIRA)
Dave Latham created HDFS-5042:
-

 Summary: Completed files lost after power failure
 Key: HDFS-5042
 URL: https://issues.apache.org/jira/browse/HDFS-5042
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: ext3 on CentOS 5.7 (kernel 2.6.18-274.el5)
Reporter: Dave Latham
Priority: Critical


We suffered a cluster-wide power failure after which HDFS lost data that it 
had acknowledged as closed and complete.

The client was HBase, which compacted a set of HFiles into a new HFile and 
then, after closing the file successfully, deleted the previous versions of 
the file. The cluster then lost power, and when brought back up the newly 
created file was marked CORRUPT.

Based on reading the logs it looks like the replicas were created by the 
DataNodes in the 'blocksBeingWritten' directory.  Then when the file was closed 
they were moved to the 'current' directory.  After the power cycle those 
replicas were again in the blocksBeingWritten directory of the underlying file 
system (ext3).  When those DataNodes reported in to the NameNode it deleted 
those replicas and lost the file.

Some possible fixes could be having the DataNode fsync the directory(s) after 
moving the block from blocksBeingWritten to current to ensure the rename is 
durable, or having the NameNode accept replicas from blocksBeingWritten under 
certain circumstances.
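
For the first suggested fix, a minimal Java sketch (illustrative, not the 
DataNode's actual code) of making the rename durable by fsyncing the 
destination directory after the move; on ext3, the directory fsync is what 
makes the rename itself survive a crash:

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class DurableRename {
  // Move a finalized block from blocksBeingWritten to current, then fsync
  // the destination directory so the new directory entry is durable.
  static void durableMove(Path from, Path to) throws IOException {
    Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);
    // Opening a directory for read and calling force() works on Linux JDKs.
    try (FileChannel dir =
        FileChannel.open(to.getParent(), StandardOpenOption.READ)) {
      dir.force(true);
    }
  }
}
{code}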

Log snippets from RS (RegionServer), NN (NameNode), DN (DataNode):
{noformat}
RS 2013-06-29 11:16:06,812 DEBUG org.apache.hadoop.hbase.util.FSUtils: Creating 
file=hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
 with permission=rwxrwxrwx
NN 2013-06-29 11:16:06,830 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.allocateBlock: 
/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c.
 blk_1395839728632046111_357084589
DN 2013-06-29 11:16:06,832 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block 
blk_1395839728632046111_357084589 src: /10.0.5.237:14327 dest: /10.0.5.237:50010
NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addStoredBlock: blockMap updated: 10.0.6.1:50010 is added to 
blk_1395839728632046111_357084589 size 25418340
NN 2013-06-29 11:16:11,370 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addStoredBlock: blockMap updated: 10.0.6.24:50010 is added to 
blk_1395839728632046111_357084589 size 25418340
NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addStoredBlock: blockMap updated: 10.0.5.237:50010 is added to 
blk_1395839728632046111_357084589 size 25418340
DN 2013-06-29 11:16:11,385 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: Received block 
blk_1395839728632046111_357084589 of size 25418340 from /10.0.5.237:14327
DN 2013-06-29 11:16:11,385 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block 
blk_1395839728632046111_357084589 terminating
NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: Removing 
lease on  file 
/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
 from client DFSClient_hb_rs_hs745,60020,1372470111932
NN 2013-06-29 11:16:11,385 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
NameSystem.completeFile: file 
/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
 is closed by DFSClient_hb_rs_hs745,60020,1372470111932
RS 2013-06-29 11:16:11,393 INFO org.apache.hadoop.hbase.regionserver.Store: 
Renaming compacted file at 
hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/.tmp/6e0cc30af6e64e56ba5a539fdf159c4c
 to 
hdfs://hm3:9000/hbase/users-6/b5b0820cde759ae68e333b2f4015bb7e/n/6e0cc30af6e64e56ba5a539fdf159c4c
RS 2013-06-29 11:16:11,505 INFO org.apache.hadoop.hbase.regionserver.Store: 
Completed major compaction of 7 file(s) in n of 
users-6,\x12\xBDp\xA3,1359426311784.b5b0820cde759ae68e333b2f4015bb7e. into 
6e0cc30af6e64e56ba5a539fdf159c4c, size=24.2m; total size for store is 24.2m

---  CRASH, RESTART -

NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addStoredBlock: addStoredBlock request received for 
blk_1395839728632046111_357084589 on 10.0.6.1:50010 size 21978112 but was 
rejected: Reported as block being written but is a block of closed file.
NN 2013-06-29 12:01:19,743 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addToInvalidates: blk_1395839728632046111 is added to invalidSet of 
10.0.6.1:50010
NN 2013-06-29 12:01:20,155 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
NameSystem.addStoredBlock: addStoredBlock request received for 
blk_1395839728632046111_357084589 on 10.0.5.237:50010 size 16971264 but was 
rejected: Reported as block being written but is a block of closed file.
NN 2013-06-29 12:01:20,155 INFO org.apache.had

[jira] [Commented] (HDFS-4953) enable HDFS local reads via mmap

2013-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722913#comment-13722913
 ] 

Hadoop QA commented on HDFS-4953:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12594739/HDFS-4953.006.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1153 javac 
compiler warnings (more than the trunk's current 1150 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSShell

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4743//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4743//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4743//console

This message is automatically generated.

> enable HDFS local reads via mmap
> 
>
> Key: HDFS-4953
> URL: https://issues.apache.org/jira/browse/HDFS-4953
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.3.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: benchmark.png, HDFS-4953.001.patch, HDFS-4953.002.patch, 
> HDFS-4953.003.patch, HDFS-4953.004.patch, HDFS-4953.005.patch, 
> HDFS-4953.006.patch
>
>
> Currently, the short-circuit local read pathway allows HDFS clients to access 
> files directly without going through the DataNode.  However, all of these 
> reads involve a copy at the operating system level, since they rely on the 
> read() / pread() / etc family of kernel interfaces.
> We would like to enable HDFS to read local files via mmap.  This would enable 
> truly zero-copy reads.
> In the initial implementation, zero-copy reads will only be performed when 
> checksums were disabled.  Later, we can use the DataNode's cache awareness to 
> only perform zero-copy reads when we know that checksum has already been 
> verified.
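
A minimal sketch of the underlying mmap technique in plain Java (not the 
HDFS-4953 API itself): mapping a local replica file lets the reader touch the 
page cache directly, with no copy into a user-space buffer.

{code}
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MmapReadDemo {
  public static void main(String[] args) throws IOException {
    Path replica = Paths.get(args[0]);  // path to a local block file
    try (FileChannel ch = FileChannel.open(replica, StandardOpenOption.READ)) {
      // The mapping is backed by the page cache: no copy into a heap buffer.
      MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
      long sum = 0;
      while (buf.hasRemaining()) {
        sum += buf.get();  // reads go straight through the mapping
      }
      System.out.println("byte sum: " + sum);
    }
  }
}
{code}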



[jira] [Commented] (HDFS-4513) Clarify WebHDFS REST API that all JSON responses may contain additional properties

2013-07-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722865#comment-13722865
 ] 

Alejandro Abdelnur commented on HDFS-4513:
--

Should we also clarify that the WebHdfsFileSystem implementation does not 
enforce any additional property not defined in this doc? +1 otherwise.

> Clarify WebHDFS REST API that all JSON responses may contain additional 
> properties
> ---
>
> Key: HDFS-4513
> URL: https://issues.apache.org/jira/browse/HDFS-4513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, webhdfs
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h4513_20130513.patch
>
>
> According to Section 5.4 in 
> http://tools.ietf.org/id/draft-zyp-json-schema-03.html, the default value of 
> "additionalProperties" is an empty schema which allows any value for 
> additional properties.  Therefore, all WebHDFS JSON responses allow any 
> additional property since the WebHDFS REST API does not specify 
> additionalProperties.
> However, it is better to clarify in the API that all JSON responses may 
> contain additional properties.



[jira] [Commented] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722795#comment-13722795
 ] 

Suresh Srinivas commented on HDFS-5025:
---

Some more comments, halfway through the review:
# FSEditLog.java - a lot of duplicated code could be removed with a method 
like logRpcIds(boolean log); see the sketch after this list.
# "add the op into retry cache is necessary" - Did you mean "add the op into 
retry cache if necessary"?
# FSEditLogLoader.java
#* Nit: unnecessary empty-line change in FSEditLogLoader.java.
#* For operations that have no payload, currently passing a null payload 
results in creating a CacheEntryWithPayload instead of a CacheEntry. Can you 
add another variant of FSNamesystem#addCacheEntry() which does not require a 
payload and calls the corresponding RetryCache variant?
# FSEditLogOp.java
#* The javadoc references methods, hasClientId() and hasCallId(), that do not 
exist.
#* FSEditLogOp.java should not use UUID notions; it should stick to the notion 
of ClientID.
#* Instead of AtMostOnce in // comments, can you turn them into block comments 
and add a link to @AtMostOnce?
#* Given that FSEditLogOp has members rpcClientId and rpcCallId, the reading 
of these in the fromXml() method, which has the same code repeated, can be 
moved to FSEditLogOp, right? Similarly, the code repeated in readFields() 
could also be moved to a common FSEditLog method.
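
A sketch of what the helper suggested in item 1 might look like; the 
signatures are hypothetical and may not match the actual FSEditLog/FSEditLogOp 
code:

{code}
// Hypothetical helper; FSEditLogOp and Server stand in for the project's
// own classes, and the real signatures may differ.
private static void logRpcIds(FSEditLogOp op, boolean toLogRpcIds) {
  if (toLogRpcIds) {
    // Stamp the op with the ids of the RPC currently being handled.
    op.setRpcClientId(Server.getClientId());
    op.setRpcCallId(Server.getCallId());
  }
}
{code}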


> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.



[jira] [Updated] (HDFS-5027) On startup, DN should scan volumes in parallel

2013-07-29 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-5027:
-

   Resolution: Fixed
Fix Version/s: 2.1.0-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks a lot for the review, Kihwal. I've just committed this to trunk, 
branch-2, and branch-2.1-beta.

> On startup, DN should scan volumes in parallel
> --
>
> Key: HDFS-5027
> URL: https://issues.apache.org/jira/browse/HDFS-5027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.0.4-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 2.1.0-beta
>
> Attachments: HDFS-5027.patch
>
>
> On startup the DN must scan all replicas on all configured volumes before the 
> initial block report to the NN. This is currently done serially, but can be 
> done in parallel to improve startup time of the DN.
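
As a generic illustration of the approach (not the attached patch; FsVolume and 
getVolumeMap() stand in for the real DataNode internals), each volume can be 
scanned on its own thread and the startup path then waits for all scans:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: scan every configured volume concurrently and wait for all
// scans to finish before sending the initial block report.
static void scanVolumes(List<FsVolume> volumes, final ReplicaMap replicaMap)
    throws Exception {
  ExecutorService pool = Executors.newFixedThreadPool(volumes.size());
  List<Future<?>> futures = new ArrayList<Future<?>>();
  for (final FsVolume v : volumes) {
    futures.add(pool.submit(new Runnable() {
      public void run() {
        v.getVolumeMap(replicaMap); // enumerate replicas on this volume
      }
    }));
  }
  for (Future<?> f : futures) {
    f.get(); // block until done; rethrows any scan failure
  }
  pool.shutdown();
}
{code}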

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722761#comment-13722761
 ] 

Colin Patrick McCabe commented on HDFS-5025:


I agree that StringUtils should not import the rpc stuff.  It seems reasonable 
to have a UUID_BYTE_LENGTH constant in StringUtils itself, and set the RPC 
constant based on that.
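
Concretely, that might look like the following sketch (exactly where the 
RPC-side constant lives is an assumption):

{code}
// In org.apache.hadoop.util.StringUtils -- a UUID is two 64-bit longs:
public static final int UUID_BYTE_LENGTH = 16;

// In the RPC layer, derived rather than duplicated:
public static final int CLIENT_ID_BYTE_LENGTH = StringUtils.UUID_BYTE_LENGTH;
{code}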

> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5041) Add the time of last heartbeat to dead server Web UI

2013-07-29 Thread Ted Yu (JIRA)
Ted Yu created HDFS-5041:


 Summary: Add the time of last heartbeat to dead server Web UI
 Key: HDFS-5041
 URL: https://issues.apache.org/jira/browse/HDFS-5041
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


On the Live Server page, there is a column 'Last Contact'.

On the dead server page, a similar column can be added to show when the last 
heartbeat came from the respective dead node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4953) enable HDFS local reads via mmap

2013-07-29 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4953:
---

Attachment: HDFS-4953.006.patch

* rebase on trunk.

* add hdfs-site.xml entries for configuration.

> enable HDFS local reads via mmap
> 
>
> Key: HDFS-4953
> URL: https://issues.apache.org/jira/browse/HDFS-4953
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.3.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: benchmark.png, HDFS-4953.001.patch, HDFS-4953.002.patch, 
> HDFS-4953.003.patch, HDFS-4953.004.patch, HDFS-4953.005.patch, 
> HDFS-4953.006.patch
>
>
> Currently, the short-circuit local read pathway allows HDFS clients to access 
> files directly without going through the DataNode.  However, all of these 
> reads involve a copy at the operating system level, since they rely on the 
> read() / pread() / etc family of kernel interfaces.
> We would like to enable HDFS to read local files via mmap.  This would enable 
> truly zero-copy reads.
> In the initial implementation, zero-copy reads will only be performed when 
> checksums were disabled.  Later, we can use the DataNode's cache awareness to 
> only perform zero-copy reads when we know that checksum has already been 
> verified.
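
As a generic illustration of the mechanism (plain java.nio, not the API added 
by this patch), mapping a local block file hands the client the OS page cache 
directly, with no copy into a user buffer:

{code}
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch only: map a local file read-only instead of copying it via read().
class MmapSketch {
  static MappedByteBuffer mapLocalFile(String path, long length) throws Exception {
    RandomAccessFile raf = new RandomAccessFile(path, "r");
    try {
      return raf.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, length);
    } finally {
      raf.close(); // the mapping remains valid after the channel is closed
    }
  }
}
{code}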

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5025) Record ClientId and CallId in EditLog to enable rebuilding retry cache in case of HA failover

2013-07-29 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722695#comment-13722695
 ] 

Suresh Srinivas commented on HDFS-5025:
---

Very early comments after reviewing the hadoop-common part of the code changes:
# Nit: StringUtils should not have knowledge of CLIENT_ID_BYTE_LENGTH and 
should not import RpcConstants. I am okay with hard-coding 16 in this case (see 
UUID.java, which does it). Alternatively, we can define a new constant 
UUID_BYTE_LENGTH in StringUtils.java.
#* Nit: StringUtils#getUuidBytes() should use the same constant defined above; 
see the sketch after this list.
# Nit: StringUtils#getUuidFromBytes() - the indentation in the for loop is 
incorrect.
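
A sketch of what that could look like (the byte layout of the serialized UUID 
is an assumption, not the actual patch):

{code}
import java.nio.ByteBuffer;
import java.util.UUID;

public static final int UUID_BYTE_LENGTH = 16; // two 64-bit longs

// Sketch only: serialize a random UUID using the shared constant.
public static byte[] getUuidBytes() {
  UUID uuid = UUID.randomUUID();
  ByteBuffer buf = ByteBuffer.allocate(UUID_BYTE_LENGTH);
  buf.putLong(uuid.getMostSignificantBits());
  buf.putLong(uuid.getLeastSignificantBits());
  return buf.array();
}
{code}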


> Record ClientId and CallId in EditLog to enable rebuilding retry cache in 
> case of HA failover
> -
>
> Key: HDFS-5025
> URL: https://issues.apache.org/jira/browse/HDFS-5025
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: editsStored, HDFS-5025.000.patch, HDFS-5025.001.patch, 
> HDFS-5025.002.patch, HDFS-5025.003.patch
>
>
> In case of HA failover, we need to be able to rebuild the retry cache in the 
> other Namenode. We thus need to record client id and call id in the edit log 
> for those "AtMostOnce" operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4953) enable HDFS local reads via mmap

2013-07-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722634#comment-13722634
 ] 

Colin Patrick McCabe commented on HDFS-4953:


I will add an entry in hdfs-default.xml documenting how to turn off this 
feature.

> enable HDFS local reads via mmap
> 
>
> Key: HDFS-4953
> URL: https://issues.apache.org/jira/browse/HDFS-4953
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.3.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: benchmark.png, HDFS-4953.001.patch, HDFS-4953.002.patch, 
> HDFS-4953.003.patch, HDFS-4953.004.patch, HDFS-4953.005.patch
>
>
> Currently, the short-circuit local read pathway allows HDFS clients to access 
> files directly without going through the DataNode.  However, all of these 
> reads involve a copy at the operating system level, since they rely on the 
> read() / pread() / etc family of kernel interfaces.
> We would like to enable HDFS to read local files via mmap.  This would enable 
> truly zero-copy reads.
> In the initial implementation, zero-copy reads will only be performed when 
> checksums were disabled.  Later, we can use the DataNode's cache awareness to 
> only perform zero-copy reads when we know that checksum has already been 
> verified.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4953) enable HDFS local reads via mmap

2013-07-29 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722504#comment-13722504
 ] 

Tsuyoshi OZAWA commented on HDFS-4953:
--

bq. Setting dfs.client.mmap.cache.size to 0 will prevent mmap from happening.
bq. If mmap does not provide a performance improvement on that platform, users 
of that platform can disable it.

OK. Then we should document them explicitly. Should we create a new JIRA to do 
that?
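
For the documentation, a minimal hdfs-site.xml sketch of the off switch, using 
the property named in the quote above (the surrounding setup is illustrative):

{code}
<!-- Sketch only: disable zero-copy mmap reads by setting the mmap cache
     size to 0, per the discussion above. -->
<property>
  <name>dfs.client.mmap.cache.size</name>
  <value>0</value>
</property>
{code}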

> enable HDFS local reads via mmap
> 
>
> Key: HDFS-4953
> URL: https://issues.apache.org/jira/browse/HDFS-4953
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 2.3.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: benchmark.png, HDFS-4953.001.patch, HDFS-4953.002.patch, 
> HDFS-4953.003.patch, HDFS-4953.004.patch, HDFS-4953.005.patch
>
>
> Currently, the short-circuit local read pathway allows HDFS clients to access 
> files directly without going through the DataNode.  However, all of these 
> reads involve a copy at the operating system level, since they rely on the 
> read() / pread() / etc family of kernel interfaces.
> We would like to enable HDFS to read local files via mmap.  This would enable 
> truly zero-copy reads.
> In the initial implementation, zero-copy reads will only be performed when 
> checksums were disabled.  Later, we can use the DataNode's cache awareness to 
> only perform zero-copy reads when we know that checksum has already been 
> verified.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5027) On startup, DN should scan volumes in parallel

2013-07-29 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13722410#comment-13722410
 ] 

Kihwal Lee commented on HDFS-5027:
--

+1, the patch looks good.

> On startup, DN should scan volumes in parallel
> --
>
> Key: HDFS-5027
> URL: https://issues.apache.org/jira/browse/HDFS-5027
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.0.4-alpha
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HDFS-5027.patch
>
>
> On startup the DN must scan all replicas on all configured volumes before the 
> initial block report to the NN. This is currently done serially, but can be 
> done in parallel to improve startup time of the DN.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2013-07-29 Thread Raghu C Doppalapudi (JIRA)
Raghu C Doppalapudi created HDFS-5040:
-

 Summary: Audit log for admin commands/ logging output of all DFS 
admin commands
 Key: HDFS-5040
 URL: https://issues.apache.org/jira/browse/HDFS-5040
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Reporter: Raghu C Doppalapudi


Enable the audit log for all the admin commands, and also provide the ability 
to log all the admin commands in a separate log file; at this point all the 
logging is displayed on the console.
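
As an illustration of the "separate log file" part, a log4j sketch in the style 
commonly used for the existing HDFS audit log; the logger category and file 
names below are placeholders for this JIRA, not existing Hadoop names:

{code}
# Sketch only: route a dedicated logger category to its own rolling file.
log4j.logger.org.apache.hadoop.hdfs.server.namenode.AdminAudit=INFO,ADMINAUDIT
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.AdminAudit=false
log4j.appender.ADMINAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ADMINAUDIT.File=/var/log/hadoop/hdfs-admin-audit.log
log4j.appender.ADMINAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.ADMINAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}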

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira