[jira] [Updated] (HDFS-9717) NameNode can not update the status of bad block

2016-01-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-9717:
-
Component/s: namenode

> NameNode can not update the status of bad block
> ---
>
> Key: HDFS-9717
> URL: https://issues.apache.org/jira/browse/HDFS-9717
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.2.0
>Reporter: tangshangwen
>Assignee: tangshangwen
>
> In our cluster, some users set the replication factor of a file to 1 and then 
> back to 2. The file cannot be read, but the NameNode considers it healthy:
> {noformat}
> /user/username/dt=2015-11-30/dp=16/part-r-00063.lzo 1513716944 bytes, 12 
> block(s):  Under replicated BP-1422437282658:blk_1897961957_824575827. Target 
> Replicas is 2 but found 1 replica(s).
>  Replica placement policy is violated for 
> BP-1422437282658:blk_1897961957_824575827. Block should be additionally 
> replicated on 1 more rack
> (s).
> 0. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961824_824575694 
> len=134217728 repl=2 [host1:50010, host2:50010]
> 1. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827 
> len=134217728 repl=1 [host3:50010]
> 2. BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897962047_824575917 
> len=134217728 repl=2 [host4:50010, host1:50010]
> ..
> Status: HEALTHY
>  Total size:   1513716944 B
>  Total dirs:   0
>  Total files:  1
>  Total symlinks:   0
>  Total blocks (validated): 12 (avg. block size 126143078 B)
>  Minimally replicated blocks:  12 (100.0 %)
>  Over-replicated blocks:   0 (0.0 %)
>  Under-replicated blocks:  1 (8.33 %)
>  Mis-replicated blocks:1 (8.33 %)
>  Default replication factor:   3
>  Average block replication:1.916
>  Corrupt blocks:   0
>  Missing replicas: 1 (4.165 %)
>  Number of data-nodes: 
>  Number of racks:  xxx
> FSCK ended at Thu Jan 28 10:27:49 CST 2016 in 0 milliseconds
> {noformat}
> But the replica on the DataNode has been damaged and cannot be read. This is 
> the DataNode log:
> {noformat}
> 2016-01-23 06:34:42,737 WARN 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: First 
> Verification failed for 
> BP-1337805335-xxx.xxx.xxx.xxx-1422437282658:blk_1897961957_824575827  
>   
>
> java.io.IOException: Input/output error   
>   
>
> at java.io.FileInputStream.readBytes(Native Method)   
>   
>
> at java.io.FileInputStream.read(FileInputStream.java:272) 
>   
>
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)   
>   
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:529)
>   
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:710)
>   
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:427)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:506)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:667)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scanBlockPoolSlice(BlockPoolSliceScanner.java:633)
>  
> at 
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:101)
>   
> at java.lang.Thread.run(Thread.java:745)
> --
> 2016-01-28 10:28:37,874 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DatanodeRegistration(host1, 
> storageID=DS-1450783279-xxx.xxx.xxx.xxx-50010-1432889625435
> , infoPort=50075, ipcPort=50020, 
> storageInfo=lv=-47;cid=CID-3f36397d-b160-4414-b7e4-f37b72e96d53;
> {noformat}

[jira] [Updated] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-10-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-9245:
-
Component/s: nfs

> Fix findbugs warnings in hdfs-nfs/WriteCtx
> --
>
> Key: HDFS-9245
> URL: https://issues.apache.org/jira/browse/HDFS-9245
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-9245.000.patch
>
>
> There are findbugs warnings as follows, brought by [HDFS-9092].
> It seems fine to ignore them by writing a filter rule in the 
> {{findbugsExcludeFile.xml}} file. 
> {code:xml}
> <BugInstance instanceHash="592511935f7cb9e5f97ef4c99a6c46c2" 
>     instanceOccurrenceNum="0" priority="2" abbrev="IS" 
>     type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of 
>     org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of 
>     time</LongMessage>
>   <SourceLine sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
>       sourcefile="WriteCtx.java" end="314">
>     <Message>At WriteCtx.java:[lines 40-314]</Message>
>   </SourceLine>
>   <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
> </BugInstance>
> {code}
> and
> {code:xml}
> <BugInstance instanceHash="4f3daa339eb819220f26c998369b02fe" 
>     instanceOccurrenceNum="0" priority="2" abbrev="IS" 
>     type="IS2_INCONSISTENT_SYNC" cweid="366" instanceOccurrenceMax="0">
>   <ShortMessage>Inconsistent synchronization</ShortMessage>
>   <LongMessage>Inconsistent synchronization of 
>     org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of 
>     time</LongMessage>
>   <SourceLine sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
>       sourcefile="WriteCtx.java" end="314">
>     <Message>At WriteCtx.java:[lines 40-314]</Message>
>   </SourceLine>
>   <Message>In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx</Message>
>   <Field name="originalCount" primary="true" signature="I">
>     <SourceLine sourcepath="org/apache/hadoop/hdfs/nfs/nfs3/WriteCtx.java" 
>         sourcefile="WriteCtx.java">
>       <Message>In WriteCtx.java</Message>
>     </SourceLine>
>     <Message>Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount</Message>
>   </Field>
> </BugInstance>
> {code}
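For illustration, a filter rule of the kind described might look like the following sketch (the actual entry added to findbugsExcludeFile.xml may differ):

{code:xml}
<FindBugsFilter>
  <Match>
    <Class name="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"/>
    <Bug pattern="IS2_INCONSISTENT_SYNC"/>
  </Match>
</FindBugsFilter>
{code}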



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests, thus data copying can't complete

2015-09-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14909087#comment-14909087
 ] 

Brandon Li commented on HDFS-9092:
--

+1. Patch looks good to me. Thank you [~yzhangal]

> Nfs silently drops overlapping write requests, thus data copying can't 
> complete
> ---
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9092.001.patch
>
>
> When NOT using the 'sync' option, NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What we found is:
> 1. The write requests from the client are sent asynchronously.
> 2. The NFS gateway has a handler that handles each incoming request by 
> creating an internal write request structure and putting it into a cache.
> 3. In parallel, a separate thread in the NFS gateway takes requests out of 
> the cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of overlapping write requests happens in step 2, but it 
> only checks the write request against the current offset and trims the 
> request if necessary. Because the write requests are sent asynchronously, if 
> two requests are beyond the current offset and they overlap, the overlap is 
> not detected and both are put into the cache. This causes the symptom 
> reported in this case at step 3.
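The race between steps 2 and 3 can be illustrated with a short Java sketch. The names and structure here are hypothetical, chosen for illustration only; this is not the actual OpenFileCtx code.

```java
import java.util.TreeMap;

// Hypothetical sketch of the check described above: an incoming write is
// compared only against the current offset, so two pending writes that both
// start beyond it can overlap each other undetected.
public class OverlapSketch {
    // Cache of pending writes from step 2, keyed by start offset (end exclusive).
    static final TreeMap<Long, Long> cache = new TreeMap<>();
    // How much data the write thread from step 3 has already written.
    static long nextOffset = 0;

    /** Returns true if the request was accepted into the cache. */
    static boolean receiveWrite(long start, long end) {
        if (start < nextOffset) {
            // Overlap with already-written data is detected here, and the
            // request is trimmed or dropped...
            return false;
        }
        // ...but nothing compares the request against other cached writes,
        // so two overlapping future writes are both accepted.
        cache.put(start, end);
        return true;
    }

    public static void main(String[] args) {
        // Both requests start beyond nextOffset (0) and overlap each other:
        System.out.println(receiveWrite(100, 200)); // true
        System.out.println(receiveWrite(150, 250)); // true: overlap undetected
    }
}
```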





[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests, thus data copying can't complete

2015-09-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14909083#comment-14909083
 ] 

Brandon Li commented on HDFS-9092:
--

For category 2, the assumption of the fix is that the trimmed data is the same 
as what's already written to HDFS.

So far we claim that the NFS gateway supports the use cases of file uploading 
and file streaming. 
For file uploading, the overlapped section is safe to drop since it will be 
the same as what's already written to HDFS. It's the same case for file 
streaming. 

The only possible problem is this: before the patch, if users do a random 
update to an HDFS file, the NFS gateway will report an error. With this patch, 
there is a chance we won't see the error if it happens that the updated 
section is trimmed.

Since random write is not supported anyway, this possibly nicer reaction to a 
random update still seems acceptable to me.

> Nfs silently drops overlapping write requests, thus data copying can't 
> complete
> ---
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9092.001.patch
>
>
> When NOT using the 'sync' option, NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What we found is:
> 1. The write requests from the client are sent asynchronously.
> 2. The NFS gateway has a handler that handles each incoming request by 
> creating an internal write request structure and putting it into a cache.
> 3. In parallel, a separate thread in the NFS gateway takes requests out of 
> the cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of overlapping write requests happens in step 2, but it 
> only checks the write request against the current offset and trims the 
> request if necessary. Because the write requests are sent asynchronously, if 
> two requests are beyond the current offset and they overlap, the overlap is 
> not detected and both are put into the cache. This causes the symptom 
> reported in this case at step 3.





[jira] [Comment Edited] (HDFS-9092) Nfs silently drops overlapping write requests, thus data copying can't complete

2015-09-17 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804667#comment-14804667
 ] 

Brandon Li edited comment on HDFS-9092 at 9/17/15 10:51 PM:


Thank you, [~yzhangal] for the patch. Could you roughly describe the idea of 
the fix, possibly by copying the relevant comment from the code to here?


was (Author: brandonli):
Thank you, [~yzhangal] for the patch. Could you roughly describe the idea of 
the fix?

> Nfs silently drops overlapping write requests, thus data copying can't 
> complete
> ---
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9092.001.patch
>
>
> When NOT using the 'sync' option, NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What we found is:
> 1. The write requests from the client are sent asynchronously.
> 2. The NFS gateway has a handler that handles each incoming request by 
> creating an internal write request structure and putting it into a cache.
> 3. In parallel, a separate thread in the NFS gateway takes requests out of 
> the cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of overlapping write requests happens in step 2, but it 
> only checks the write request against the current offset and trims the 
> request if necessary. Because the write requests are sent asynchronously, if 
> two requests are beyond the current offset and they overlap, the overlap is 
> not detected and both are put into the cache. This causes the symptom 
> reported in this case at step 3.





[jira] [Commented] (HDFS-9092) Nfs silently drops overlapping write requests, thus data copying can't complete

2015-09-17 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804667#comment-14804667
 ] 

Brandon Li commented on HDFS-9092:
--

Thank you, [~yzhangal] for the patch. Could you roughly describe the idea of 
the fix?

> Nfs silently drops overlapping write requests, thus data copying can't 
> complete
> ---
>
> Key: HDFS-9092
> URL: https://issues.apache.org/jira/browse/HDFS-9092
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9092.001.patch
>
>
> When NOT using the 'sync' option, NFS writes may issue the following warning:
> org.apache.hadoop.hdfs.nfs.nfs3.OpenFileCtx: Got an overlapping write 
> (1248751616, 1249677312), nextOffset=1248752400. Silently drop it now
> and the size of data copied via NFS will stay at 1248752400.
> What we found is:
> 1. The write requests from the client are sent asynchronously.
> 2. The NFS gateway has a handler that handles each incoming request by 
> creating an internal write request structure and putting it into a cache.
> 3. In parallel, a separate thread in the NFS gateway takes requests out of 
> the cache and writes the data to HDFS.
> The current offset is how much data has been written by the write thread in 
> step 3. The detection of overlapping write requests happens in step 2, but it 
> only checks the write request against the current offset and trims the 
> request if necessary. Because the write requests are sent asynchronously, if 
> two requests are beyond the current offset and they overlap, the overlap is 
> not detected and both are put into the cache. This causes the symptom 
> reported in this case at step 3.





[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2015-08-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712494#comment-14712494
 ] 

Brandon Li commented on HDFS-4750:
--

Overlapped writes usually happen for the first block.
For example, a file has 100 bytes. A user writes this file to the NFS mount 
point. After a while (with the 100 bytes still in the OS buffer cache), the 
user appends another 100 bytes. In this case, the NFS client will rewrite the 
first block (bytes 0-199), and thus we see an overlapped write. This kind of 
case should be well handled by the current code (but there still might be bugs 
there :-(   )

Based on your description, the problem does not seem to be the above case. 
There is one case we currently have no way to control on the server side:

1. The client sends write 0-99; we cache this write and do the write 
asynchronously.
2. The client sends another write, 180-299; we cache it too, but can't write 
it since there is a hole.
3. The client sends another write, 100-199; we do the write since it's a 
sequential write.
4. After we finish writing (0-99) and (100-199), we see an overlapped write 
(180-299) in the cache. This is where you see the error message, since we are 
expecting another sequential write (200-xxx).

This kind of overlapped write happens very rarely. In case it happens, we have 
multiple copies of the same range (180-199 in the above example). The data 
could be different, and when it is, it can be hard to know which copy is 
really expected by the client. 
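The four-step sequence above can be stepped through with a small Java sketch. The classification logic and names here are hypothetical illustrations, not the gateway's actual code:

```java
// Hypothetical sketch of how a write request is classified against the
// current offset, matching the four-step scenario described above.
public class SequentialWriteSketch {
    static long nextOffset = 0;

    /** Classify a write request covering bytes [start, end] inclusive. */
    static String classify(long start, long end) {
        if (start == nextOffset) {       // sequential: write it through
            nextOffset = end + 1;
            return "written";
        }
        if (start > nextOffset) {        // hole before it: keep in the cache
            return "cached";
        }
        return "overlap detected";       // starts before data already written
    }

    public static void main(String[] args) {
        System.out.println(classify(0, 99));     // "written", nextOffset -> 100
        System.out.println(classify(180, 299));  // "cached": hole at 100-179
        System.out.println(classify(100, 199));  // "written", nextOffset -> 200
        // Step 4: the cached (180, 299) request is re-examined and now starts
        // before nextOffset (200); this is where the error message appears.
        System.out.println(classify(180, 299));  // "overlap detected"
    }
}
```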

> Support NFSv3 interface to HDFS
> ---
>
> Key: HDFS-4750
> URL: https://issues.apache.org/jira/browse/HDFS-4750
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-NFS-Proposal.pdf, HDFS-4750.patch, nfs-trunk.patch
>
>
> Accessing HDFS is usually done through the HDFS client or webHDFS. Lack of 
> seamless integration with the client's file system makes it difficult for 
> users and impossible for some applications to access HDFS. NFS interface 
> support is one way for HDFS to have such easy integration.
> This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
> client, webHDFS and the NFS interface, HDFS will be easier to access and able 
> to support more applications and use cases. 
> We will upload the design document and the initial implementation. 





[jira] [Commented] (HDFS-8237) Move all protocol classes used by ClientProtocol to hdfs-client

2015-05-04 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14527016#comment-14527016
 ] 

Brandon Li commented on HDFS-8237:
--

+1 pending Jenkins.

> Move all protocol classes used by ClientProtocol to hdfs-client
> ---
>
> Key: HDFS-8237
> URL: https://issues.apache.org/jira/browse/HDFS-8237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: build
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8237.000.patch, HDFS-8237.001.patch, 
> HDFS-8237.002.patch
>
>
> This jira proposes to move the classes in the hdfs project referred to by 
> ClientProtocol into the hdfs-client.





[jira] [Commented] (HDFS-8200) Refactor FSDirStatAndListingOp

2015-04-30 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522192#comment-14522192
 ] 

Brandon Li commented on HDFS-8200:
--


+1.

> Refactor FSDirStatAndListingOp
> --
>
> Key: HDFS-8200
> URL: https://issues.apache.org/jira/browse/HDFS-8200
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HDFS-8200.000.patch, HDFS-8200.001.patch
>
>
> After HDFS-6826 several functions in {{FSDirStatAndListingOp}} are dead. This 
> jira proposes to clean them up.





[jira] [Commented] (HDFS-8102) Separate webhdfs retry configuration keys from DFSConfigKeys

2015-04-09 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14487801#comment-14487801
 ] 

Brandon Li commented on HDFS-8102:
--

+1.

> Separate webhdfs retry configuration keys from DFSConfigKeys
> 
>
> Key: HDFS-8102
> URL: https://issues.apache.org/jira/browse/HDFS-8102
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
> Attachments: HDFS-8102.000.patch, HDFS-8102.001.patch
>
>






[jira] [Updated] (HDFS-8001) RpcProgramNfs3 : wrong parsing of dfs.blocksize

2015-04-01 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-8001:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> ---
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.5.2
> Environment: any : windows, linux, etc.
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: easyfix
> Attachments: HDFS-8001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong 
> to get the dfs.blocksize value, but it should use getLongBytes so it can 
> handle syntax like 64m rather than only pure numeric values. DataNode code 
> and others all use getLongBytes.
> It's line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.
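To illustrate the difference, here is a minimal, self-contained re-implementation of size-suffix parsing in the style of getLongBytes. This is an illustration only, not the actual Hadoop Configuration code:

```java
// Illustrative only: shows why plain Long.parseLong (what Configuration.getLong
// effectively does) rejects values like "64m", while suffix-aware parsing
// (what getLongBytes provides) accepts them.
public class BlockSizeParse {
    static long parseSize(String v) {
        v = v.trim().toLowerCase();
        char last = v.charAt(v.length() - 1);
        long mult = 1;
        switch (last) {
            case 'k': mult = 1L << 10; break;   // kibibytes
            case 'm': mult = 1L << 20; break;   // mebibytes
            case 'g': mult = 1L << 30; break;   // gibibytes
        }
        // Strip the suffix character unless the value is purely numeric.
        String digits = Character.isDigit(last) ? v : v.substring(0, v.length() - 1);
        return Long.parseLong(digits) * mult;
    }

    public static void main(String[] args) {
        System.out.println(parseSize("64m"));       // 67108864
        System.out.println(parseSize("134217728")); // plain numbers still work
        // By contrast, Long.parseLong("64m") throws NumberFormatException.
    }
}
```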





[jira] [Updated] (HDFS-8001) RpcProgramNfs3 : wrong parsing of dfs.blocksize

2015-04-01 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-8001:
-
Fix Version/s: 2.7.0

> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> ---
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.5.2
> Environment: any : windows, linux, etc.
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: easyfix
> Fix For: 2.7.0
>
> Attachments: HDFS-8001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong 
> to get the dfs.blocksize value, but it should use getLongBytes so it can 
> handle syntax like 64m rather than only pure numeric values. DataNode code 
> and others all use getLongBytes.
> It's line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.





[jira] [Commented] (HDFS-8001) RpcProgramNfs3 : wrong parsing of dfs.blocksize

2015-04-01 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391849#comment-14391849
 ] 

Brandon Li commented on HDFS-8001:
--

I've committed the patch. Thank you, [~rcatherinot] for the contribution!

> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> ---
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.5.2
> Environment: any : windows, linux, etc.
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: easyfix
> Attachments: HDFS-8001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong 
> to get the dfs.blocksize value, but it should use getLongBytes so it can 
> handle syntax like 64m rather than only pure numeric values. DataNode code 
> and others all use getLongBytes.
> It's line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.





[jira] [Commented] (HDFS-8001) RpcProgramNfs3 : wrong parsing of dfs.blocksize

2015-03-30 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386977#comment-14386977
 ] 

Brandon Li commented on HDFS-8001:
--

+1

> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> ---
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.5.2
> Environment: any : windows, linux, etc.
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: easyfix
> Attachments: HDFS-8001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong 
> to get the dfs.blocksize value, but it should use getLongBytes so it can 
> handle syntax like 64m rather than only pure numeric values. DataNode code 
> and others all use getLongBytes.
> It's line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.





[jira] [Updated] (HDFS-8001) RpcProgramNfs3 : wrong parsing of dfs.blocksize

2015-03-30 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-8001:
-
Assignee: Remi Catherinot

> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> ---
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.5.2
> Environment: any : windows, linux, etc.
>Reporter: Remi Catherinot
>Assignee: Remi Catherinot
>Priority: Trivial
>  Labels: easyfix
> Attachments: HDFS-8001.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong 
> to get the dfs.blocksize value, but it should use getLongBytes so it can 
> handle syntax like 64m rather than only pure numeric values. DataNode code 
> and others all use getLongBytes.
> It's line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.





[jira] [Commented] (HDFS-8001) RpcProgramNfs3 : wrong parsing of dfs.blocksize

2015-03-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14384214#comment-14384214
 ] 

Brandon Li commented on HDFS-8001:
--

Thank you, [~rcatherinot], for filing this JIRA. 
What you described makes sense. Do you want to make a patch and upload to this 
JIRA so it can trigger the Jenkins build?

> RpcProgramNfs3 : wrong parsing of dfs.blocksize
> ---
>
> Key: HDFS-8001
> URL: https://issues.apache.org/jira/browse/HDFS-8001
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0, 2.5.2
> Environment: any : windows, linux, etc.
>Reporter: Remi Catherinot
>Priority: Trivial
>  Labels: easyfix
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java uses Configuration.getLong 
> to get the dfs.blocksize value, but it should use getLongBytes so it can 
> handle syntax like 64m rather than only pure numeric values. DataNode code 
> and others all use getLongBytes.
> It's line 187 in the source code.
> Detected on version 2.5.2; checked version 2.6.0, which still has the bug.





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382612#comment-14382612
 ] 

Brandon Li commented on HDFS-5523:
--

Correct. If we don't have this special case, the administrator can never export 
"/" as long as there is a subdirectory exported (since nested export is 
disallowed).

> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.
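For reference, the single export point described above is set in the gateway's configuration roughly as follows (the path shown is a hypothetical example):

{code:xml}
<property>
  <name>dfs.nfs3.export.point</name>
  <value>/user</value>
</property>
{code}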





[jira] [Updated] (HDFS-7989) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7989:
-
Attachment: HDFS-7989.002.patch

It's kind of tricky to add unit tests. I manually tested it as follows:
1. Start the Linux native NFS server.
2. Try to start the NFS gateway. It couldn't bind the port and shut itself 
down as expected.

Uploaded a new patch to fix some class descriptions.
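The intended startup behavior (bind the service port or shut the process down) can be sketched as follows. This is a hypothetical illustration of the pattern, not the actual Nfs3 or Portmap code:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class BindOrShutdownSketch {
    /** Try to bind the service port; terminate the process on failure. */
    static ServerSocket bindOrDie(int port) {
        try {
            return new ServerSocket(port);
        } catch (IOException e) {
            System.err.println("Failed to bind port " + port + ", shutting down: " + e);
            // Shut the gateway down instead of running half-started.
            System.exit(1);
            return null; // unreachable
        }
    }

    public static void main(String[] args) throws IOException {
        // Port 0 asks the OS for any free port, for demonstration purposes.
        ServerSocket s = bindOrDie(0);
        System.out.println("bound port " + s.getLocalPort());
        s.close();
    }
}
```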

> NFS gateway should shutdown when it can't start UDP or TCP server
> -
>
> Key: HDFS-7989
> URL: https://issues.apache.org/jira/browse/HDFS-7989
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch
>
>
> Unlike the Portmap, Nfs3 class does shutdown when the service can't start.





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382522#comment-14382522
 ] 

Brandon Li commented on HDFS-5523:
--

Yes. We should not always export "/" as one of the exports (though that's what 
we currently have now). 
We can just allow exporting "/" along with other sub-directories. The 
administrator will decide when/whether to export it and which hosts should 
have access.

> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Updated] (HDFS-7989) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7989:
-
Summary: NFS gateway should shutdown when it can't start UDP or TCP server  
(was: NFS gateway should shutdown when it can't bind the serivce ports)

> NFS gateway should shutdown when it can't start UDP or TCP server
> -
>
> Key: HDFS-7989
> URL: https://issues.apache.org/jira/browse/HDFS-7989
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7989.001.patch
>
>
> Unlike the Portmap, Nfs3 class does shutdown when the service can't start.





[jira] [Updated] (HDFS-7989) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7989:
-
Status: Patch Available  (was: Open)

> NFS gateway should shutdown when it can't start UDP or TCP server
> -
>
> Key: HDFS-7989
> URL: https://issues.apache.org/jira/browse/HDFS-7989
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7989.001.patch
>
>
> Unlike the Portmap, Nfs3 class does shutdown when the service can't start.





[jira] [Updated] (HDFS-7989) NFS gateway should shutdown when it can't bind the serivce ports

2015-03-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7989:
-
Attachment: HDFS-7989.001.patch

> NFS gateway should shutdown when it can't bind the service ports
> 
>
> Key: HDFS-7989
> URL: https://issues.apache.org/jira/browse/HDFS-7989
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7989.001.patch
>
>
> Unlike the Portmap, Nfs3 class does shutdown when the service can't start.





[jira] [Updated] (HDFS-7989) NFS gateway should shutdown when it can't bind the service ports

2015-03-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7989:
-
Description: Unlike the Portmap, Nfs3 class does shutdown when the service 
can't start.  (was: Unlike the Portmap, Nfs3 class does shutdown even the 
service can't start.)

> NFS gateway should shutdown when it can't bind the service ports
> 
>
> Key: HDFS-7989
> URL: https://issues.apache.org/jira/browse/HDFS-7989
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> Unlike the Portmap, Nfs3 class does shutdown when the service can't start.





[jira] [Created] (HDFS-7989) NFS gateway should shutdown when it can't bind the service ports

2015-03-25 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7989:


 Summary: NFS gateway should shutdown when it can't bind the 
service ports
 Key: HDFS-7989
 URL: https://issues.apache.org/jira/browse/HDFS-7989
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li


Unlike the Portmap, Nfs3 class does shutdown even when the service can't start.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-24 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Fix Version/s: 2.7.0

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-24 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Commented] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-24 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378246#comment-14378246
 ] 

Brandon Li commented on HDFS-7977:
--

Thank you, Haohui, for the review. 
The unit test failure was not introduced by this patch. I've manually verified 
the patch as described in the user guide.

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Commented] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-24 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378256#comment-14378256
 ] 

Brandon Li commented on HDFS-7977:
--

I've committed the patch.

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Commented] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-24 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378209#comment-14378209
 ] 

Brandon Li commented on HDFS-7976:
--

I've committed the patch. Thank you, [~arpitagarwal], for the review!

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-24 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-24 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Fix Version/s: 2.7.0

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Status: Patch Available  (was: Open)

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Attachment: HDFS-7977.001.patch

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7977.001.patch
>
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Updated] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7977:
-
Affects Version/s: (was: 2.6.0)
   2.7.0

> NFS couldn't take percentile intervals
> --
>
> Key: HDFS-7977
> URL: https://issues.apache.org/jira/browse/HDFS-7977
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> The configuration "nfs.metrics.percentiles.intervals" is not recognized by 
> NFS gateway.





[jira] [Created] (HDFS-7977) NFS couldn't take percentile intervals

2015-03-23 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7977:


 Summary: NFS couldn't take percentile intervals
 Key: HDFS-7977
 URL: https://issues.apache.org/jira/browse/HDFS-7977
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li


The configuration "nfs.metrics.percentiles.intervals" is not recognized by NFS 
gateway.





[jira] [Commented] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376889#comment-14376889
 ] 

Brandon Li commented on HDFS-7976:
--

Thank you, Arpit. I've updated the patch to indicate the importance of this 
option for large file uploads.

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Attachment: HDFS-7976.002.patch

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch, HDFS-7976.002.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predicable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Attachment: HDFS-7976.001.patch

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Status: Patch Available  (was: Open)

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7976.001.patch
>
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Updated] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7976:
-
Description: The mount option "sync" is critical. I observed that this 
mount option can minimize or avoid reordered writes. Mount option "sync" could 
have some negative performance impact on the file uploading. However, it makes 
the performance much more predictable and can also reduce the possibility of 
failures caused by file dumping.  (was: The mount option "sync" is critical. I 
observed that this mount option can minimize or avoid reordered writes. Mount 
option "sync" could have some negative performance impact on the file 
uploading. However, it makes the performance much more predicable and can also 
reduce the possibly of failures caused by file dumping.)

> Update NFS user guide for mount option "sync" to minimize or avoid reordered 
> writes
> ---
>
> Key: HDFS-7976
> URL: https://issues.apache.org/jira/browse/HDFS-7976
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation, nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> The mount option "sync" is critical. I observed that this mount option can 
> minimize or avoid reordered writes. Mount option "sync" could have some 
> negative performance impact on the file uploading. However, it makes the 
> performance much more predictable and can also reduce the possibility of 
> failures caused by file dumping.





[jira] [Created] (HDFS-7976) Update NFS user guide for mount option "sync" to minimize or avoid reordered writes

2015-03-23 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7976:


 Summary: Update NFS user guide for mount option "sync" to minimize 
or avoid reordered writes
 Key: HDFS-7976
 URL: https://issues.apache.org/jira/browse/HDFS-7976
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation, nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li


The mount option "sync" is critical. I observed that this mount option can 
minimize or avoid reordered writes. Mount option "sync" could have some 
negative performance impact on the file uploading. However, it makes the 
performance much more predictable and can also reduce the possibility of failures 
caused by file dumping.





[jira] [Comment Edited] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-23 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372340#comment-14372340
 ] 

Brandon Li edited comment on HDFS-5523 at 3/23/15 6:26 PM:
---

Sounds like a good starting point. 
Additionally, how about also allowing "/" to be exported as a special case 
along with other subdirectory exports? The benefit is that it makes it 
convenient for the admin to:
1. directly operate on subdirectories under "/"
2. back up any top-level sub-directory under "/" without needing to share 
and mount each one individually
Also, the root export can make it easier to run applications which require 
access to multiple top-level subdirectories. 




was (Author: brandonli):
Sounds like a good start point. 
Additionally, how about also allowing "/" to be exported as a special case 
along with other subdirectory exports? The benefit is that, it makes it 
convenient for the admin to:
1. directly operate on subdirectories under "/"  
2. do backup of any top sub-directory under "/" without the need of 
share-and-mount them individually
Also, the root export can make it easier for some applications which require 
accessing multiple top level subdirectories. 



> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Updated] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
Fix Version/s: 2.7.0

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Updated] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-23 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-23 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376206#comment-14376206
 ] 

Brandon Li commented on HDFS-7942:
--

I've committed the patch. Thank you, Haohui and Jing, for the review!

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-23 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376195#comment-14376195
 ] 

Brandon Li commented on HDFS-7942:
--

The unit test failures were not introduced by this patch.
I'll commit the patch shortly.

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-20 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372340#comment-14372340
 ] 

Brandon Li commented on HDFS-5523:
--

Sounds like a good starting point. 
Additionally, how about also allowing "/" to be exported as a special case 
along with other subdirectory exports? The benefit is that it makes it 
convenient for the admin to:
1. directly operate on subdirectories under "/"
2. back up any top-level sub-directory under "/" without needing to share 
and mount each one individually
Also, the root export can make it easier for applications which require 
access to multiple top-level subdirectories. 



> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Updated] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
Attachment: HDFS-7942.002.patch

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch, HDFS-7942.002.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-20 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14371760#comment-14371760
 ] 

Brandon Li commented on HDFS-7942:
--

Here is the explanation of wildcard usage in the export table 
(http://linux.die.net/man/5/exports):
{noformat}
wildcards
Machine names may contain the wildcard characters * and ?, or may contain 
character class lists within [square brackets]. This can be used to make the 
exports file more compact; for instance, *.cs.foo.edu matches all hosts in 
the domain cs.foo.edu. As these characters also match the dots in a domain 
name, the given pattern will also match all hosts within any subdomain of 
cs.foo.edu.
{noformat}

Since the NFS Gateway uses Java regular expressions, the usage of wildcards is 
a bit different. For example,
{noformat}
1. instead of "*.cs.foo.edu", one should use "\\w*.cs.foo.edu"
2. instead of "206.190.52.[26|23]", one should use "206.190.52.(26|23)"
{noformat}
I will update the user guide accordingly.
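The difference above can be demonstrated with plain java.util.regex. This is a standalone sketch, not gateway code; the class name AllowedHostsDemo and its helper are made up for illustration.

```java
import java.util.regex.Pattern;

// Standalone demo of why exports(5)-style globs fail when the value of
// nfs.exports.allowed.hosts is treated as a Java regular expression.
public class AllowedHostsDemo {
    // Whole-string match, as java.util.regex.Pattern.matches does.
    static boolean allowed(String pattern, String host) {
        return Pattern.matches(pattern, host);
    }

    public static void main(String[] args) {
        // "[26|23]" is a character class in Java regex: it matches a
        // single character ('2', '3', '6', or '|'), so neither full
        // address matches and every mount is denied.
        System.out.println(allowed("206.190.52.[26|23]", "206.190.52.26")); // false
        // The grouping form "(26|23)" matches both addresses.
        System.out.println(allowed("206.190.52.(26|23)", "206.190.52.26")); // true
        System.out.println(allowed("206.190.52.(26|23)", "206.190.52.23")); // true
    }
}
```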

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370198#comment-14370198
 ] 

Brandon Li commented on HDFS-7942:
--

"+" and "-" are not expected to be used frequently in hostnames or IP 
addresses, but I could be very wrong here. I just didn't figure out a good 
way to decide whether a string is a regular expression.

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set regex value in nfs.exports.allowed.hosts property.
> {noformat}
> nfs.exports.allowed.hosts = 206.190.52.[26|23] rw
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount nfs and 
> act as nfs client. In conclusion, no host can mount nfs with this regex value 
> due to access denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Updated] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
Status: Patch Available  (was: Open)

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set a regex value in the nfs.exports.allowed.hosts property:
> {noformat}
> <property>
>   <name>nfs.exports.allowed.hosts</name>
>   <value>206.190.52.[26|23] rw</value>
> </property>
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
> act as an NFS client. In short, no host can mount NFS with this regex value; 
> the mount fails with an access-denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Updated] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
Attachment: HDFS-7942.001.patch

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set a regex value in the nfs.exports.allowed.hosts property:
> {noformat}
> <property>
>   <name>nfs.exports.allowed.hosts</name>
>   <value>206.190.52.[26|23] rw</value>
> </property>
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
> act as an NFS client. In short, no host can mount NFS with this regex value; 
> the mount fails with an access-denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Commented] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370024#comment-14370024
 ] 

Brandon Li commented on HDFS-7942:
--

Uploaded a patch to support grouping in nfs.exports.allowed.hosts.

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-7942.001.patch
>
>
> Thanks, [~yeshavora], for reporting this problem.
> Set a regex value in the nfs.exports.allowed.hosts property:
> {noformat}
> <property>
>   <name>nfs.exports.allowed.hosts</name>
>   <value>206.190.52.[26|23] rw</value>
> </property>
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
> act as an NFS client. In short, no host can mount NFS with this regex value; 
> the mount fails with an access-denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Updated] (HDFS-7942) NFS: support regexp grouping in nfs.exports.allowed.hosts

2015-03-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
Summary: NFS: support regexp grouping in nfs.exports.allowed.hosts  (was: 
NFS: regex value of nfs.exports.allowed.hosts is not working as expected)

> NFS: support regexp grouping in nfs.exports.allowed.hosts
> -
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> Thanks, [~yeshavora], for reporting this problem.
> Set a regex value in the nfs.exports.allowed.hosts property:
> {noformat}
> <property>
>   <name>nfs.exports.allowed.hosts</name>
>   <value>206.190.52.[26|23] rw</value>
> </property>
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
> act as an NFS client. In short, no host can mount NFS with this regex value; 
> the mount fails with an access-denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Comment Edited] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-18 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368136#comment-14368136
 ] 

Brandon Li edited comment on HDFS-5523 at 3/18/15 11:40 PM:


{quote}
What I think we should allow is to put /a/b in the mount table, even though 
it's not a top-level directory.
{quote}
So your actual proposal is:
1. users can only mount the directories in the export table, and
2. the directories in the export table must not have an ancestor-descendant 
relationship (i.e., not nested).
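The no-nesting rule in point 2 is easy to validate up front. The sketch below is illustrative only; `find_nested_exports` is a hypothetical helper, not gateway code:

```python
def find_nested_exports(exports):
    """Return (ancestor, descendant) pairs that violate the no-nesting
    rule for an export table."""
    norm = [p.rstrip("/") or "/" for p in exports]
    bad = []
    for a in norm:
        for b in norm:
            # b is nested under a if it lies strictly below a in the tree
            if a != b and b.startswith("/" if a == "/" else a + "/"):
                bad.append((a, b))
    return bad
```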







was (Author: brandonli):
{quote}
What I think we should allow is to put /a/b in the mount table, even though 
it's not a top-level directory.
{quote}
Then we will have the tricky problem I mentioned above: if /a and /a/b are 
both in the export table, and "export /a is read-only but /a/b is read-write, 
when a user traverses from /a to /a/b, it's tricky to decide which access the 
user should have".

> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-18 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368136#comment-14368136
 ] 

Brandon Li commented on HDFS-5523:
--

{quote}
What I think we should allow is to put /a/b in the mount table, even though 
it's not a top-level directory.
{quote}
Then we will have the tricky problem I mentioned above: if /a and /a/b are 
both in the export table, and "export /a is read-only but /a/b is read-write, 
when a user traverses from /a to /a/b, it's tricky to decide which access the 
user should have".

> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Commented] (HDFS-7942) NFS: regex value of nfs.exports.allowed.hosts is not working as expected

2015-03-17 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366320#comment-14366320
 ] 

Brandon Li commented on HDFS-7942:
--

The regular expression is wrong: instead of "206.190.52.[26|23]", it should be 
"206.190.52.(26|23)". "[26|23]" is a character class matching a single 
character, while "(26|23)" is a group matching "26" or "23".
However, the grouping syntax "( )" is not recognized by the NfsExports class.
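The difference between the two patterns can be reproduced with any regex engine. The sketch below uses Python's `re`; `java.util.regex`, which backs the NfsExports match, behaves the same way for these patterns:

```python
import re

# "[26|23]" is a character class: it matches exactly one character that is
# '2', '6', '|' or '3', so it can never consume the two-character host
# suffix "26" -- a full match of the address fails.
assert re.fullmatch(r"206.190.52.[26|23]", "206.190.52.26") is None

# "(26|23)" is a group with alternation: it matches the string "26" or
# "23", which is what the export entry intended.
assert re.fullmatch(r"206.190.52.(26|23)", "206.190.52.26") is not None
assert re.fullmatch(r"206.190.52.(26|23)", "206.190.52.23") is not None
assert re.fullmatch(r"206.190.52.(26|23)", "206.190.52.24") is None
```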

> NFS: regex value of nfs.exports.allowed.hosts is not working as expected
> 
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> Thanks, [~yeshavora], for reporting this problem.
> Set a regex value in the nfs.exports.allowed.hosts property:
> {noformat}
> <property>
>   <name>nfs.exports.allowed.hosts</name>
>   <value>206.190.52.[26|23] rw</value>
> </property>
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
> act as an NFS client. In short, no host can mount NFS with this regex value; 
> the mount fails with an access-denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Updated] (HDFS-7942) NFS: regex value of nfs.exports.allowed.hosts is not working as expected

2015-03-17 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7942:
-
Description: 
Thanks, [~yeshavora], for reporting this problem.

Set a regex value in the nfs.exports.allowed.hosts property:

{noformat}
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>206.190.52.[26|23] rw</value>
</property>
{noformat}

With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and act 
as an NFS client. In short, no host can mount NFS with this regex value; the 
mount fails with an access-denied error.

{noformat}
>$ sudo su - -c "mount -o 
>soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
>/tmp/tmp_mnt" root
mount.nfs: access denied by server while mounting 206.190.52.23:/
{noformat}

  was:
set a regex value in the nfs.exports.allowed.hosts property:

{noformat}
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>206.190.52.[26|23] rw</value>
</property>
{noformat}

With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and act 
as an NFS client. In short, no host can mount NFS with this regex value; the 
mount fails with an access-denied error.

{noformat}
>$ sudo su - -c "mount -o 
>soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
>/tmp/tmp_mnt" root
mount.nfs: access denied by server while mounting 206.190.52.23:/
{noformat}


> NFS: regex value of nfs.exports.allowed.hosts is not working as expected
> 
>
> Key: HDFS-7942
> URL: https://issues.apache.org/jira/browse/HDFS-7942
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> Thanks, [~yeshavora], for reporting this problem.
> Set a regex value in the nfs.exports.allowed.hosts property:
> {noformat}
> <property>
>   <name>nfs.exports.allowed.hosts</name>
>   <value>206.190.52.[26|23] rw</value>
> </property>
> {noformat}
> With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and 
> act as an NFS client. In short, no host can mount NFS with this regex value; 
> the mount fails with an access-denied error.
> {noformat}
> >$ sudo su - -c "mount -o 
> >soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
> >/tmp/tmp_mnt" root
> mount.nfs: access denied by server while mounting 206.190.52.23:/
> {noformat}





[jira] [Created] (HDFS-7942) NFS: regex value of nfs.exports.allowed.hosts is not working as expected

2015-03-17 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7942:


 Summary: NFS: regex value of nfs.exports.allowed.hosts is not 
working as expected
 Key: HDFS-7942
 URL: https://issues.apache.org/jira/browse/HDFS-7942
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.6.0
Reporter: Brandon Li
Assignee: Brandon Li


Set a regex value in the nfs.exports.allowed.hosts property:

{noformat}
<property>
  <name>nfs.exports.allowed.hosts</name>
  <value>206.190.52.[26|23] rw</value>
</property>
{noformat}

With this value, neither 206.190.52.26 nor 206.190.52.23 can mount NFS and act 
as an NFS client. In short, no host can mount NFS with this regex value; the 
mount fails with an access-denied error.

{noformat}
>$ sudo su - -c "mount -o 
>soft,proto=tcp,vers=3,rsize=1048576,wsize=1048576,nolock 206.190.52.23:/ 
>/tmp/tmp_mnt" root
mount.nfs: access denied by server while mounting 206.190.52.23:/
{noformat}





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-16 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364259#comment-14364259
 ] 

Brandon Li commented on HDFS-5523:
--

{quote}We should probably disallow nesting in this phase.{quote}
Sounds OK to me. Otherwise it would be hard to decide the access mode. For 
example, if export /a is read-only but /a/b is read-write, it's tricky to 
decide which access the user should have when traversing from /a to /a/b.

{quote}I think we should allow mounting a non-top-level directory.{quote}
In this case, we can make root the only export point and let users mount any 
sub-directory. This might be the easiest way to implement, but it provides 
only marginal security: users can mount any sub-directory as long as they have 
the right to perform the mount operation. A root user is usually needed to 
mount, so if the customer's environment controls who can perform mounts, this 
option is a viable solution. To add a bit more security control, the user can 
choose to export multiple sub-directories instead of only the root.
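For comparison, exporters that do allow nesting usually resolve the access-mode ambiguity by letting the longest matching export win. The sketch below illustrates that policy only; it is not something the HDFS NFS gateway implements, and `access_for` is a hypothetical name:

```python
def access_for(path, exports):
    """exports: dict mapping export dir -> access mode ('ro' or 'rw').
    Return the mode of the longest export that is an ancestor of (or
    equal to) the requested path, or None if no export covers it."""
    path = path.rstrip("/") or "/"
    best = None
    for exp, mode in exports.items():
        e = exp.rstrip("/") or "/"
        if path == e or path.startswith("/" if e == "/" else e + "/"):
            # Longest matching export governs the path
            if best is None or len(e) > len(best[0]):
                best = (e, mode)
    return best[1] if best else None
```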


> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14361289#comment-14361289
 ] 

Brandon Li commented on HDFS-5523:
--

[~Rosa], I guess you posted comments to the wrong JIRA.

> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Commented] (HDFS-5523) Support multiple subdirectory exports in HDFS NFS gateway

2015-03-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14361225#comment-14361225
 ] 

Brandon Li commented on HDFS-5523:
--

[~zhz], please feel free to take over. I am not working on it currently.

A few questions to think about regarding support for this feature:
1. If there are multiple exports, each export may need an access setting like 
the one in a Linux export table.
2. Do we want to allow exporting both a directory and its subdirectory, e.g., 
both /a and /a/b?
3. If exports are not allowed to be nested, do we want to allow users to mount 
a subdirectory of an export? E.g., if the export is /a, can a user mount /a/b 
even though /a/b is not in the export table?



> Support multiple subdirectory exports in HDFS NFS gateway 
> --
>
> Key: HDFS-5523
> URL: https://issues.apache.org/jira/browse/HDFS-5523
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: Brandon Li
>
> Currently, the HDFS NFS Gateway only supports configuring a single 
> subdirectory export via the  {{dfs.nfs3.export.point}} configuration setting. 
> Supporting multiple subdirectory exports can make data and security 
> management easier when using the HDFS NFS Gateway.





[jira] [Updated] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7926:
-
Fix Version/s: 2.7.0

> NameNode implementation of ClientProtocol.truncate(..) is not idempotent
> 
>
> Key: HDFS-7926
> URL: https://issues.apache.org/jira/browse/HDFS-7926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.7.0
>
> Attachments: h7926_20150313.patch, h7926_20150313b.patch
>
>
> If dfsclient drops the first response of a truncate RPC call, the retry by 
> retry cache will fail with "DFSClient ... is already the current lease 
> holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
> the NameNode implementation is not.





[jira] [Commented] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360869#comment-14360869
 ] 

Brandon Li commented on HDFS-7926:
--

Thank you, [~szetszwo], for the fix. I've committed the patch.

> NameNode implementation of ClientProtocol.truncate(..) is not idempotent
> 
>
> Key: HDFS-7926
> URL: https://issues.apache.org/jira/browse/HDFS-7926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h7926_20150313.patch, h7926_20150313b.patch
>
>
> If dfsclient drops the first response of a truncate RPC call, the retry by 
> retry cache will fail with "DFSClient ... is already the current lease 
> holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
> the NameNode implementation is not.





[jira] [Updated] (HDFS-7926) NameNode implementation of ClientProtocol.truncate(..) is not idempotent

2015-03-13 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7926:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> NameNode implementation of ClientProtocol.truncate(..) is not idempotent
> 
>
> Key: HDFS-7926
> URL: https://issues.apache.org/jira/browse/HDFS-7926
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h7926_20150313.patch, h7926_20150313b.patch
>
>
> If dfsclient drops the first response of a truncate RPC call, the retry by 
> retry cache will fail with "DFSClient ... is already the current lease 
> holder".  The truncate RPC is annotated as @Idempotent in ClientProtocol but 
> the NameNode implementation is not.





[jira] [Updated] (HDFS-7925) truncate RPC should not be considered as idempotent

2015-03-12 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7925:
-
Description: 
Currently truncate is considered an idempotent call in ClientProtocol. 
However, a retried RPC request can get a lease error like the following:

2015-03-12 11:45:47,320 INFO  ipc.Server (Server.java:run(2053)) - IPC Server 
handler 6 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.truncate 
from 192.168.76.4:49763 Call#1 Retry#1: 
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to 
TRUNCATE_FILE /user/testuser/testFileTr for DFSClient_NONMAPREDUCE_171671673_1 
on 192.168.76.4 because DFSClient_NONMAPREDUCE_171671673_1 is already the 
current lease holder.



  was:
Currently truncate is considered an idempotent call in ClientProtocol. 
However, a retried RPC request can get a lease error like the following:

2015-03-12 11:45:47,320 INFO  ipc.Server (Server.java:run(2053)) - IPC Server 
handler 6 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.truncate 
from 192.168.76.4:49763 Call#1 Retry#1: 
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to 
TRUNCATE_FILE /user/hrt_qa/testFileTr for DFSClient_NONMAPREDUCE_171671673_1 on 
192.168.76.4 because DFSClient_NONMAPREDUCE_171671673_1 is already the current 
lease holder.




> truncate RPC should not be considered as idempotent
> ---
>
> Key: HDFS-7925
> URL: https://issues.apache.org/jira/browse/HDFS-7925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>
> Currently truncate is considered an idempotent call in ClientProtocol. 
> However, a retried RPC request can get a lease error like the following:
> 2015-03-12 11:45:47,320 INFO  ipc.Server (Server.java:run(2053)) - IPC Server 
> handler 6 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.truncate from 
> 192.168.76.4:49763 Call#1 Retry#1: 
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to 
> TRUNCATE_FILE /user/testuser/testFileTr for 
> DFSClient_NONMAPREDUCE_171671673_1 on 192.168.76.4 because 
> DFSClient_NONMAPREDUCE_171671673_1 is already the current lease holder.





[jira] [Created] (HDFS-7925) truncate RPC should not be considered idempotent

2015-03-12 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7925:


 Summary: truncate RPC should not be considered idempotent
 Key: HDFS-7925
 URL: https://issues.apache.org/jira/browse/HDFS-7925
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: Brandon Li


Currently truncate is considered an idempotent call in ClientProtocol. 
However, a retried RPC request can get a lease error like the following:

2015-03-12 11:45:47,320 INFO  ipc.Server (Server.java:run(2053)) - IPC Server 
handler 6 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.truncate 
from 192.168.76.4:49763 Call#1 Retry#1: 
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to 
TRUNCATE_FILE /user/hrt_qa/testFileTr for DFSClient_NONMAPREDUCE_171671673_1 on 
192.168.76.4 because DFSClient_NONMAPREDUCE_171671673_1 is already the current 
lease holder.
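The failure mode can be illustrated with a toy model: the first truncate call (whose response was dropped) makes the client the lease holder, so a blind retry of the same RPC trips over its own lease. This is a simplified illustration, not the NameNode code:

```python
class AlreadyBeingCreatedException(Exception):
    pass

class ToyNameNode:
    """Toy model of the lease check that makes a naive truncate retry
    fail; deliberately much simpler than the real NameNode."""
    def __init__(self):
        self.lease_holder = {}

    def truncate(self, path, client):
        if self.lease_holder.get(path) == client:
            # The first call (whose response was lost) already took the
            # lease, so the retried RPC trips over its own lease.
            raise AlreadyBeingCreatedException(
                f"{client} is already the current lease holder")
        self.lease_holder[path] = client
        return True
```

This is why HDFS-7926 handles the retry via the retry cache rather than treating truncate as blindly re-executable.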







[jira] [Updated] (HDFS-7925) truncate RPC should not be considered as idempotent

2015-03-12 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7925:
-
Summary: truncate RPC should not be considered as idempotent  (was: 
truncate RPC should not be considered idempotent)

> truncate RPC should not be considered as idempotent
> ---
>
> Key: HDFS-7925
> URL: https://issues.apache.org/jira/browse/HDFS-7925
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Brandon Li
>
> Currently truncate is considered an idempotent call in ClientProtocol. 
> However, a retried RPC request can get a lease error like the following:
> 2015-03-12 11:45:47,320 INFO  ipc.Server (Server.java:run(2053)) - IPC Server 
> handler 6 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.truncate from 
> 192.168.76.4:49763 Call#1 Retry#1: 
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: Failed to 
> TRUNCATE_FILE /user/hrt_qa/testFileTr for DFSClient_NONMAPREDUCE_171671673_1 
> on 192.168.76.4 because DFSClient_NONMAPREDUCE_171671673_1 is already the 
> current lease holder.





[jira] [Comment Edited] (HDFS-6488) Support HDFS superuser in NFSv3 gateway

2015-03-06 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351115#comment-14351115
 ] 

Brandon Li edited comment on HDFS-6488 at 3/6/15 11:28 PM:
---

Thank you, Stephen, Colin, Akira and Jing. I've updated the title and committed 
the patch.


was (Author: brandonli):
Thank you, Stephen, Colin and Jing. I've updated the title and committed the 
patch.

> Support HDFS superuser in NFSv3 gateway
> ---
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when superuser tries to access schu 
> Trash contents. However, for other permission errors (e.g. schu tries to 
> delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> I think this is probably not specific to the .Trash directory.
> I created a /user/schu/dir1 which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as hdfs superuser, 
> I get the same permission denied.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}
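One way a gateway could honor an HDFS superuser is to short-circuit the permission check for a configured superuser name before falling back to the POSIX-style mode bits. The sketch below only illustrates that idea; it is not the HDFS-6488 patch, and all names are hypothetical:

```python
def can_access(user, configured_superuser, owner, mode_bits, want):
    """Simplified permission check: `want` is a bitmask (4=read,
    2=write, 1=execute) tested against the owner or 'other' bits of a
    POSIX-style mode (group bits are ignored for brevity)."""
    if configured_superuser is not None and user == configured_superuser:
        return True  # superuser bypasses the mode-bit check entirely
    perm = (mode_bits >> 6) if user == owner else (mode_bits & 0o7)
    return (perm & want) == want
```

With a 700 directory like .Trash above, a configured superuser would pass the check while an ordinary non-owner would still be denied.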





[jira] [Updated] (HDFS-6488) Support HDFS superuser in NFSv3 gateway

2015-03-06 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Fix Version/s: 2.7.0

> Support HDFS superuser in NFSv3 gateway
> ---
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Fix For: 2.7.0
>
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when superuser tries to access schu 
> Trash contents. However, for other permission errors (e.g. schu tries to 
> delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> I think this is probably not specific to the .Trash directory.
> I created a /user/schu/dir1 which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as hdfs superuser, 
> I get the same permission denied.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}





[jira] [Updated] (HDFS-6488) Support HDFS superuser in NFSv3 gateway

2015-03-06 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Support HDFS superuser in NFSv3 gateway
> ---
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs  64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't show any error when the superuser tries to access schu's 
> Trash contents. However, for other permission errors (e.g. schu trying to 
> delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> I think this is perhaps not specific to the .Trash directory.
> I created a /user/schu/dir1 which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as hdfs superuser, 
> I get the same permission denied.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs  0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}
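The asymmetry quoted above — FsShell honors the HDFS superuser while the NFSv3 mount does not — comes down to whether the gateway's access check has a superuser bypass. A minimal, self-contained sketch of that idea; `AccessChecker` and all names here are invented for illustration and are not the actual gateway code:

```java
import java.util.Objects;

/**
 * Toy model of a directory read-permission check, illustrating the superuser
 * bypass that HDFS-6488 adds to the NFS gateway. All names are invented.
 */
final class AccessChecker {
    private final String superuser; // null = no bypass configured (old behavior)

    AccessChecker(String superuser) {
        this.superuser = superuser;
    }

    /** mode is the 9-bit POSIX permission word, e.g. 0700 for .Trash. */
    boolean canRead(String caller, String owner, int mode) {
        if (superuser != null && superuser.equals(caller)) {
            return true; // configured superuser bypasses the check
        }
        if (Objects.equals(caller, owner)) {
            return (mode & 0400) != 0; // owner read bit
        }
        return (mode & 0004) != 0; // "other" read bit (group check omitted)
    }

    public static void main(String[] args) {
        AccessChecker before = new AccessChecker(null);   // gateway without the fix
        AccessChecker after  = new AccessChecker("hdfs"); // with a superuser configured

        // /user/schu/.Trash is mode 700 and owned by schu:
        System.out.println(before.canRead("hdfs", "schu", 0700)); // denied -> EACCES
        System.out.println(after.canRead("hdfs", "schu", 0700));  // allowed
    }
}
```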





[jira] [Commented] (HDFS-6488) Support HDFS superuser in NFSv3 gateway

2015-03-06 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351115#comment-14351115
 ] 

Brandon Li commented on HDFS-6488:
--

Thank you, Stephen, Colin and Jing. I've updated the title and committed the 
patch.

> Support HDFS superuser in NFSv3 gateway
> ---
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>


[jira] [Updated] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-06 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Issue Type: New Feature  (was: Bug)

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>


[jira] [Updated] (HDFS-6488) Support HDFS superuser in NFSv3 gateway

2015-03-06 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Summary: Support HDFS superuser in NFSv3 gateway  (was: HDFS superuser 
unable to access user's Trash files using NFSv3 mount)

> Support HDFS superuser in NFSv3 gateway
> ---
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-04 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14347414#comment-14347414
 ] 

Brandon Li commented on HDFS-6488:
--

The unit test failure is not introduced by this patch.

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>


[jira] [Updated] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-03 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Attachment: HDFS-6488.003.patch

Updated the patch to fix the findbugs warning.

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch, 
> HDFS-6488.003.patch
>
>


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344192#comment-14344192
 ] 

Brandon Li commented on HDFS-6488:
--

{quote}
Does it make sense to have the NFS superuser default to the name of the proxy 
user?
{quote}
I don't have a strong reason to say the proxy user should not be the 
superuser; I just feel it's more flexible not to enforce it. In many 
environments, the HDFS superuser (e.g., "hdfs") is not a real user account and 
is created only on the cluster running Hadoop. Users may choose to start the 
gateway on a client node where the HDFS superuser is not configured (or where 
the UID for "hdfs" is already taken by another account on that host).
Secondly, starting to enforce that the proxy user be the HDFS superuser would 
be an incompatible change.
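Concretely, the patch makes the gateway superuser an opt-in configuration rather than something inferred from the proxy user. Assuming the property name documented for the Hadoop NFS gateway (`nfs.superuser`; verify against your release), the bypass is enabled on the gateway host with something like:

```xml
<!-- hdfs-site.xml on the NFS gateway host -->
<property>
  <name>nfs.superuser</name>
  <value>hdfs</value>
  <description>
    Requests from this user bypass permission checks in the gateway,
    matching HDFS superuser semantics. Leave unset to keep the old
    behavior (no bypass).
  </description>
</property>
```

Leaving the property unset preserves the pre-patch behavior, which is what makes this change backward compatible.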


> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch
>
>


[jira] [Updated] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Status: Patch Available  (was: Open)

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch
>
>


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344059#comment-14344059
 ] 

Brandon Li commented on HDFS-6488:
--

I've uploaded a new patch to address the comments from @Akira AJISAKA.

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch
>
>


[jira] [Comment Edited] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344059#comment-14344059
 ] 

Brandon Li edited comment on HDFS-6488 at 3/2/15 11:59 PM:
---

I've uploaded a new patch to address the comments from [~ajisakaa].


was (Author: brandonli):
I've uploaded a new patch to address the comments from @Akira AJISAKA.

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch
>
>


[jira] [Updated] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-03-02 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Attachment: HDFS-6488.002.patch

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch, HDFS-6488.002.patch
>
>


[jira] [Resolved] (HDFS-6445) NFS: Add a log message 'Permission denied' while writing data from read only mountpoint

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-6445.
--
Resolution: Duplicate

> NFS: Add a log message 'Permission denied' while writing data from read only 
> mountpoint
> ---
>
> Key: HDFS-6445
> URL: https://issues.apache.org/jira/browse/HDFS-6445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> Add a log message to the NFS log file when a write operation is performed on a 
> read-only mount point.
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs
> {noformat}
> bash: cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> The real reason for the append failure is permission denied. That should be 
> printed in the NFS logs; currently, the NFS log prints only the messages below. 
> {noformat}
> 2014-05-22 21:50:56,068 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 7340032 
> length:1048576 stableHow:0 xid:1904385849
> 2014-05-22 21:50:56,076 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1921163065
> 2014-05-22 21:50:56,078 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 8388608 
> length:1048576 stableHow:0 xid:1921163065
> 2014-05-22 21:50:56,086 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1937940281
> 2014-05-22 21:50:56,087 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 9437184 
> length:1048576 stableHow:0 xid:1937940281
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1954717497
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 10485760 
> length:168 stableHow:0 xid:1954717497
> {noformat}
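The fix credited in the follow-up comments ("correct error code mapping", HDFS-6411) amounts to translating the server-side permission failure into NFS3ERR_ACCES (13, per RFC 1813), which the client surfaces as "Permission denied", instead of the generic NFS3ERR_IO the user saw here. A hedged sketch with invented class names — the real mapping lives in Hadoop's RpcProgramNfs3:

```java
/** NFSv3 status codes from RFC 1813 (subset). */
final class Nfs3Status {
    static final int NFS3_OK = 0;
    static final int NFS3ERR_IO = 5;     // generic I/O error the client saw before the fix
    static final int NFS3ERR_ACCES = 13; // surfaces as "Permission denied" on the client
}

/** Illustrative mapping of server-side failures to NFSv3 status codes. */
final class WriteErrorMapper {
    static int map(Exception e) {
        // SecurityException stands in for Hadoop's AccessControlException
        // to keep this sketch self-contained.
        if (e instanceof SecurityException) {
            return Nfs3Status.NFS3ERR_ACCES;
        }
        return Nfs3Status.NFS3ERR_IO; // everything else: generic I/O error
    }

    public static void main(String[] args) {
        System.out.println(map(new SecurityException("read-only export")));
        System.out.println(map(new RuntimeException("disk failure")));
    }
}
```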





[jira] [Comment Edited] (HDFS-6445) NFS: Add a log message 'Permission denied' while writing data from read only mountpoint

2015-02-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340658#comment-14340658
 ] 

Brandon Li edited comment on HDFS-6445 at 2/27/15 9:14 PM:
---

I believe it's fixed by HDFS-6411, which did the correct error code mapping.


was (Author: brandonli):
I believe it's fixed by other fixes, which did the correct error code mapping.

> NFS: Add a log message 'Permission denied' while writing data from read only 
> mountpoint
> ---
>
> Key: HDFS-6445
> URL: https://issues.apache.org/jira/browse/HDFS-6445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> Add a log message in NFS log file when a write operation is performed on read 
> only mount point
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs
> {noformat}
> bash: cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> The real reason for the append failure is permission denied, and that should 
> be printed in the NFS logs. Currently, the NFS log prints the messages below:
> {noformat}
> 2014-05-22 21:50:56,068 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 7340032 
> length:1048576 stableHow:0 xid:1904385849
> 2014-05-22 21:50:56,076 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1921163065
> 2014-05-22 21:50:56,078 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 8388608 
> length:1048576 stableHow:0 xid:1921163065
> 2014-05-22 21:50:56,086 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1937940281
> 2014-05-22 21:50:56,087 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 9437184 
> length:1048576 stableHow:0 xid:1937940281
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1954717497
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 10485760 
> length:168 stableHow:0 xid:1954717497
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-6446) NFS: Different error messages for appending/writing data from read only mount

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HDFS-6446.
--
Resolution: Duplicate

> NFS: Different error messages for appending/writing data from read only mount
> -
>
> Key: HDFS-6446
> URL: https://issues.apache.org/jira/browse/HDFS-6446
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs from read only mount point
> Append data
> {noformat}
> bash$ cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> 4) Write data from read only mount point
> Copy data
> {noformat}
> bash$ cp /tmp/tmp_10MB.txt /tmp/tmp_mnt/tmp/
> cp: cannot create regular file `/tmp/tmp_mnt/tmp/tmp_10MB.txt': Permission 
> denied
> {noformat}
> The two operations are treated differently. Copying data returns a valid 
> error message ('Permission denied'), but appending data does not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6446) NFS: Different error messages for appending/writing data from read only mount

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HDFS-6446:


Assignee: Brandon Li

> NFS: Different error messages for appending/writing data from read only mount
> -
>
> Key: HDFS-6446
> URL: https://issues.apache.org/jira/browse/HDFS-6446
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs from read only mount point
> Append data
> {noformat}
> bash$ cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> 4) Write data from read only mount point
> Copy data
> {noformat}
> bash$ cp /tmp/tmp_10MB.txt /tmp/tmp_mnt/tmp/
> cp: cannot create regular file `/tmp/tmp_mnt/tmp/tmp_10MB.txt': Permission 
> denied
> {noformat}
> The two operations are treated differently. Copying data returns a valid 
> error message ('Permission denied'), but appending data does not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6446) NFS: Different error messages for appending/writing data from read only mount

2015-02-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340678#comment-14340678
 ] 

Brandon Li commented on HDFS-6446:
--

I believe this problem has been fixed by HDFS-6411. The error message is the 
same now:

$ cat namenode-metrics.out >> /tmp/mnt/abcd
cat: stdout: Permission denied

$ touch /tmp/mnt/t
touch: /tmp/mnt/t: Permission denied


> NFS: Different error messages for appending/writing data from read only mount
> -
>
> Key: HDFS-6446
> URL: https://issues.apache.org/jira/browse/HDFS-6446
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs from read only mount point
> Append data
> {noformat}
> bash$ cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> 4) Write data from read only mount point
> Copy data
> {noformat}
> bash$ cp /tmp/tmp_10MB.txt /tmp/tmp_mnt/tmp/
> cp: cannot create regular file `/tmp/tmp_mnt/tmp/tmp_10MB.txt': Permission 
> denied
> {noformat}
> The two operations are treated differently. Copying data returns a valid 
> error message ('Permission denied'), but appending data does not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6445) NFS: Add a log message 'Permission denied' while writing data from read only mountpoint

2015-02-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340658#comment-14340658
 ] 

Brandon Li commented on HDFS-6445:
--

I believe it's fixed by other fixes, which did the correct error code mapping.
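For context, the "error code mapping" here means translating HDFS-side exceptions into the proper NFSv3 status codes, so the client reports "Permission denied" instead of a generic "Input/output error". A minimal, hypothetical sketch of the idea — the status code values are from RFC 1813, but the helper and the use of `SecurityException` (standing in for HDFS's `AccessControlException`, to keep the sketch self-contained) are illustrative, not the actual Hadoop code:

```java
// NfsErrorMapper.java -- illustrative sketch only, not Hadoop's RpcProgramNfs3.
public class NfsErrorMapper {
    // NFSv3 status codes from RFC 1813.
    static final int NFS3ERR_IO = 5;     // client reports "Input/output error"
    static final int NFS3ERR_ACCES = 13; // client reports "Permission denied"

    // Hypothetical mapping: before the fix, permission failures fell through
    // to the generic NFS3ERR_IO; mapping them to NFS3ERR_ACCES lets the NFS
    // client surface a meaningful "Permission denied" to the user.
    static int statusFor(Exception e) {
        if (e instanceof SecurityException) {
            return NFS3ERR_ACCES;
        }
        return NFS3ERR_IO;
    }

    public static void main(String[] args) {
        System.out.println(statusFor(new SecurityException("Permission denied"))); // 13
        System.out.println(statusFor(new java.io.IOException("disk failure")));    // 5
    }
}
```

With this kind of mapping in the WRITE handler, the read-only-mount append in the steps above would fail with EACCES rather than EIO.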

> NFS: Add a log message 'Permission denied' while writing data from read only 
> mountpoint
> ---
>
> Key: HDFS-6445
> URL: https://issues.apache.org/jira/browse/HDFS-6445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> Add a log message in NFS log file when a write operation is performed on read 
> only mount point
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs
> {noformat}
> bash: cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> The real reason for the append failure is permission denied, and that should 
> be printed in the NFS logs. Currently, the NFS log prints the messages below:
> {noformat}
> 2014-05-22 21:50:56,068 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 7340032 
> length:1048576 stableHow:0 xid:1904385849
> 2014-05-22 21:50:56,076 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1921163065
> 2014-05-22 21:50:56,078 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 8388608 
> length:1048576 stableHow:0 xid:1921163065
> 2014-05-22 21:50:56,086 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1937940281
> 2014-05-22 21:50:56,087 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 9437184 
> length:1048576 stableHow:0 xid:1937940281
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1954717497
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 10485760 
> length:168 stableHow:0 xid:1954717497
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6445) NFS: Add a log message 'Permission denied' while writing data from read only mountpoint

2015-02-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340654#comment-14340654
 ] 

Brandon Li commented on HDFS-6445:
--

This problem is not reproducible any more. I tried some tests and got the 
correct error:

$ cat largefile.out >> /tmp/mnt/abcd
cat: stdout: Permission denied


> NFS: Add a log message 'Permission denied' while writing data from read only 
> mountpoint
> ---
>
> Key: HDFS-6445
> URL: https://issues.apache.org/jira/browse/HDFS-6445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Yesha Vora
>
> Add a log message in NFS log file when a write operation is performed on read 
> only mount point
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs
> {noformat}
> bash: cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> The real reason for the append failure is permission denied, and that should 
> be printed in the NFS logs. Currently, the NFS log prints the messages below:
> {noformat}
> 2014-05-22 21:50:56,068 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 7340032 
> length:1048576 stableHow:0 xid:1904385849
> 2014-05-22 21:50:56,076 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1921163065
> 2014-05-22 21:50:56,078 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 8388608 
> length:1048576 stableHow:0 xid:1921163065
> 2014-05-22 21:50:56,086 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1937940281
> 2014-05-22 21:50:56,087 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 9437184 
> length:1048576 stableHow:0 xid:1937940281
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1954717497
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 10485760 
> length:168 stableHow:0 xid:1954717497
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6445) NFS: Add a log message 'Permission denied' while writing data from read only mountpoint

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HDFS-6445:


Assignee: Brandon Li

> NFS: Add a log message 'Permission denied' while writing data from read only 
> mountpoint
> ---
>
> Key: HDFS-6445
> URL: https://issues.apache.org/jira/browse/HDFS-6445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Reporter: Yesha Vora
>Assignee: Brandon Li
>
> Add a log message in NFS log file when a write operation is performed on read 
> only mount point
> steps:
> 1) set dfs.nfs.exports.allowed.hosts =  ro
> 2) Restart nfs server
> 3) Append data on file present on hdfs
> {noformat}
> bash: cat /tmp/tmp_10MB.txt >> /tmp/tmp_mnt/expected_data_stream
> cat: write error: Input/output error
> {noformat}
> The real reason for the append failure is permission denied, and that should 
> be printed in the NFS logs. Currently, the NFS log prints the messages below:
> {noformat}
> 2014-05-22 21:50:56,068 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 7340032 
> length:1048576 stableHow:0 xid:1904385849
> 2014-05-22 21:50:56,076 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1921163065
> 2014-05-22 21:50:56,078 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 8388608 
> length:1048576 stableHow:0 xid:1921163065
> 2014-05-22 21:50:56,086 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1937940281
> 2014-05-22 21:50:56,087 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 9437184 
> length:1048576 stableHow:0 xid:1937940281
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:handleInternal(1936)) - 
> WRITE_RPC_CALL_START1954717497
> 2014-05-22 21:50:56,091 DEBUG nfs3.RpcProgramNfs3 
> (RpcProgramNfs3.java:write(731)) - NFS WRITE fileId: 16493 offset: 10485760 
> length:168 stableHow:0 xid:1954717497
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-27 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HDFS-6488:


Assignee: Brandon Li

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>Assignee: Brandon Li
> Attachments: HDFS-6488.001.patch
>
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's Trash contents. However, for other permission errors (e.g., schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> This is probably not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339501#comment-14339501
 ] 

Brandon Li edited comment on HDFS-6488 at 2/27/15 1:09 AM:
---

Uploaded a patch to show the idea.
It's hard to write an end-to-end test for NFS for now, so I manually tested it 
on MacOS and Ubuntu. Here is what I did:
0. configure "nfs.superuser" as "brandon". Start HDFS as user "brandon".
1. with user "test1", create a file and change its permission to 000
2. switch to user "brandon", I can read the file and then delete it.

[~schu], could you please verify the fix in your environment?


was (Author: brandonli):
Uploaded a patch to show the idea.
It's hard to write an end-to-end test for NFS for now, so I manually tested it 
on MacOS and Ubuntu. Here is what I did:
0. configure "nfs.superuser" as "brandon"
1. with user "test1", create a file and change its permission to 000
2. switch to user "brandon", I can read the file and then delete it.

[~schu], could you please verify the fix in your environment?

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
> Attachments: HDFS-6488.001.patch
>
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's Trash contents. However, for other permission errors (e.g., schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> This is probably not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-6488:
-
Attachment: HDFS-6488.001.patch

Uploaded a patch to show the idea.
It's hard to write an end-to-end test for NFS for now, so I manually tested it 
on MacOS and Ubuntu. Here is what I did:
0. configure "nfs.superuser" as "brandon"
1. with user "test1", create a file and change its permission to 000
2. switch to user "brandon", I can read the file and then delete it.

[~schu], could you please verify the fix in your environment?

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
> Attachments: HDFS-6488.001.patch
>
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's Trash contents. However, for other permission errors (e.g., schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> This is probably not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339435#comment-14339435
 ] 

Brandon Li commented on HDFS-6488:
--

We could add a configuration property to let the user specify the superuser if 
the gateway can't identify the superuser by itself.
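
A sketch of what such a property might look like in hdfs-site.xml. The name nfs.superuser is taken from the manual-test description later in this thread; the exact property name, value, and file placement are assumptions, not a finalized interface:

```xml
<!-- Hypothetical configuration: designate the HDFS superuser for the NFS
     gateway so it can be granted access to any file system object. -->
<property>
  <name>nfs.superuser</name>
  <value>hdfs</value>
</property>
```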

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's Trash contents. However, for other permission errors (e.g., schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> This is probably not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339423#comment-14339423
 ] 

Brandon Li commented on HDFS-6488:
--

With a non-secure cluster, the NFS gateway is started by the proxy user. For a 
secure HDFS cluster, the NFS gateway can be started by anyone, as long as that 
user can access the Kerberos keytab to register as the proxy user.
Maybe I missed something, but I don't recall any access to the NN/DN that 
requires superuser privilege in the gateway. In hadoop1, we did have some NN 
RPCs (getDiskStatus?) that required superuser privilege.






> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's Trash contents. However, for other permission errors (e.g., schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> This is probably not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337660#comment-14337660
 ] 

Brandon Li commented on HDFS-6488:
--

The question is: how can the gateway know who the superuser is? It doesn't 
seem reasonable to assume the gateway is always started by the superuser. Any 
suggestions?

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's Trash contents. However, for other permission errors (e.g., schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> This is probably not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7843) A truncated file is corrupted after rollback from a rolling upgrade

2015-02-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337590#comment-14337590
 ] 

Brandon Li commented on HDFS-7843:
--

+1. The patch looks good to me.

> A truncated file is corrupted after rollback from a rolling upgrade
> ---
>
> Key: HDFS-7843
> URL: https://issues.apache.org/jira/browse/HDFS-7843
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Blocker
> Attachments: h7843_20150226.patch
>
>
> Here is a rolling upgrade truncate test from [~brandonli].  The basic test 
> steps are (3-node cluster with HA):
> 1. Upload a file to HDFS.
> 2. Start a rolling upgrade; finish the rolling upgrade for the NameNode and one DataNode.
> 3. Truncate the file in HDFS to 1 byte.
> 4. Roll back.
> 5. Download the file from HDFS and check that its size matches the original.
> The file size in HDFS is correct, but the file can't be read because the 
> block is corrupted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6488) HDFS superuser unable to access user's Trash files using NFSv3 mount

2015-02-25 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337480#comment-14337480
 ] 

Brandon Li commented on HDFS-6488:
--

As Colin mentioned, root squash is implicitly disabled in the NFS gateway because 
it treats root or hdfs as a regular user and passes it directly to HDFS.  The 
purpose of HDFS-6498 is to squash any user, or a range of users, in order to make 
the static mapping easier to use.

One possible way to treat "hdfs" (the configured superuser, i.e. the HDFS 
namespace owner) as a superuser and always give it access to any file system 
object is to return all access permissions in the ACCESS call. I did a quick 
test on macOS, and it seems to work.
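A minimal sketch of that idea (the names here are illustrative, not the actual Hadoop NFS gateway classes): the ACCESS reply for the configured superuser reports every permission bit as granted, bypassing the per-file mode check.

```java
// Hedged sketch (illustrative names, not the actual Hadoop NFS gateway code):
// in the ACCESS reply, the configured HDFS superuser gets the full NFSv3
// access mask regardless of the file's permission bits.
public class AccessSketch {

    // NFSv3 ACCESS bits per RFC 1813: READ|LOOKUP|MODIFY|EXTEND|DELETE|EXECUTE.
    static final int ACCESS_ALL = 0x3F;

    static int accessMask(String caller, String superuser, int permissionMask) {
        if (caller.equals(superuser)) {
            return ACCESS_ALL;      // superuser: grant everything
        }
        return permissionMask;      // others: result of the normal mode check
    }

    public static void main(String[] args) {
        // 0x09 = READ|EXTEND, a made-up result of a normal permission check.
        System.out.println(accessMask("hdfs", "hdfs", 0x09));
        System.out.println(accessMask("schu", "hdfs", 0x09));
    }
}
```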

> HDFS superuser unable to access user's Trash files using NFSv3 mount
> 
>
> Key: HDFS-6488
> URL: https://issues.apache.org/jira/browse/HDFS-6488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.3.0
>Reporter: Stephen Chu
>
> As the hdfs superuser on the NFS mount, I cannot cd or ls the 
> /user/schu/.Trash directory:
> {code}
> bash-4.1$ cd .Trash/
> bash: cd: .Trash/: Permission denied
> bash-4.1$ ls -la
> total 2
> drwxr-xr-x 4 schu 2584148964 128 Jan  7 10:42 .
> drwxr-xr-x 4 hdfs 2584148964 128 Jan  6 16:59 ..
> drwx------ 2 schu 2584148964  64 Jan  7 10:45 .Trash
> drwxr-xr-x 2 hdfs hdfs        64 Jan  7 10:42 tt
> bash-4.1$ ls .Trash
> ls: cannot open directory .Trash: Permission denied
> bash-4.1$
> {code}
> When using FsShell as hdfs superuser, I have superuser permissions to schu's 
> .Trash contents:
> {code}
> bash-4.1$ hdfs dfs -ls -R /user/schu/.Trash
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user
> drwx------   - schu supergroup  0 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu
> -rw-r--r--   1 schu supergroup  4 2014-01-07 10:48 
> /user/schu/.Trash/Current/user/schu/tf1
> {code}
> The NFSv3 logs don't produce any error when the superuser tries to access 
> schu's .Trash contents. However, for other permission errors (e.g. schu tries 
> to delete a directory owned by hdfs), there will be a permission error in the 
> logs.
> I suspect this is not specific to the .Trash directory.
> I created /user/schu/dir1, which has the same permissions as .Trash (700). 
> When I try cd'ing into the directory from the NFSv3 mount as the hdfs 
> superuser, I get the same "Permission denied" error.
> {code}
> [schu@hdfs-nfs ~]$ hdfs dfs -ls
> Found 4 items
> drwx------   - schu supergroup  0 2014-01-07 10:57 .Trash
> drwx------   - schu supergroup  0 2014-01-07 11:05 dir1
> -rw-r--r--   1 schu supergroup  4 2014-01-07 11:05 tf1
> drwxr-xr-x   - hdfs hdfs        0 2014-01-07 10:42 tt
> bash-4.1$ whoami
> hdfs
> bash-4.1$ pwd
> /hdfs_nfs_mount/user/schu
> bash-4.1$ cd dir1
> bash: cd: dir1: Permission denied
> bash-4.1$
> {code}





[jira] [Commented] (HDFS-7733) NFS: readdir/readdirplus return null directory attribute on failure

2015-02-04 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14305682#comment-14305682
 ] 

Brandon Li commented on HDFS-7733:
--

+1. The patch looks good to me. Thank you, Arpit, for the fix.

> NFS: readdir/readdirplus return null directory attribute on failure
> ---
>
> Key: HDFS-7733
> URL: https://issues.apache.org/jira/browse/HDFS-7733
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.6.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-7733.01.patch
>
>
> NFS readdir and readdirplus operations return a null directory attribute on 
> some failure paths. This causes clients to get a 'Stale file handle' error 
> which can only be fixed by unmounting and remounting the share.
> The issue can be reproduced by running 'ls' against a large directory which 
> is being actively modified, triggering the 'cookie mismatch' failure path.
> {code}
> } else {
>   LOG.error("cookieverf mismatch. request cookieverf: " + cookieVerf
>   + " dir cookieverf: " + dirStatus.getModificationTime());
>   return new READDIRPLUS3Response(Nfs3Status.NFS3ERR_BAD_COOKIE);
> }
> {code}
> Thanks to [~brandonli] for catching the issue.
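For illustration, a hedged sketch of the fix direction (the class and field names here are hypothetical, not the actual org.apache.hadoop.nfs types): the error response should still carry the post-op directory attributes, so the client does not conclude the handle is stale.

```java
// Hedged sketch of the fix direction (class and field names are hypothetical,
// not the actual org.apache.hadoop.nfs types): even on NFS3ERR_BAD_COOKIE the
// response should carry the post-op directory attributes, so the client does
// not treat the directory handle as stale.
class DirAttr {
    final long mtime;
    DirAttr(long mtime) { this.mtime = mtime; }
}

class ReaddirPlusResponse {
    final int status;
    final DirAttr postOpDirAttr;   // was null on the failure path before the fix
    ReaddirPlusResponse(int status, DirAttr attr) {
        this.status = status;
        this.postOpDirAttr = attr;
    }
}

public class ReaddirSketch {
    static final int NFS3ERR_BAD_COOKIE = 10003;  // RFC 1813 error code

    static ReaddirPlusResponse onCookieMismatch(DirAttr dirAttr) {
        // Before the fix: new ReaddirPlusResponse(NFS3ERR_BAD_COOKIE, null),
        // which left the client with a 'Stale file handle' error.
        return new ReaddirPlusResponse(NFS3ERR_BAD_COOKIE, dirAttr);
    }

    public static void main(String[] args) {
        ReaddirPlusResponse r = onCookieMismatch(new DirAttr(42L));
        System.out.println(r.status + " " + (r.postOpDirAttr != null));
    }
}
```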





[jira] [Commented] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors

2015-01-30 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14299518#comment-14299518
 ] 

Brandon Li commented on HDFS-7696:
--

+1. The patch looks good to me.

> FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors
> --
>
> Key: HDFS-7696
> URL: https://issues.apache.org/jira/browse/HDFS-7696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h7696_20150128.patch
>
>
> getTmpInputStreams(..) opens a block file and a meta file, and then returns 
> them as ReplicaInputStreams.  The caller is responsible for closing those 
> streams.  In case of an error, an exception is thrown without closing the 
> files, leaking the file descriptors.
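A minimal sketch of the close-on-error pattern such a fix needs (the helper names are illustrative, not the actual FsDatasetImpl code): if opening the second stream fails, the first one must be closed before the exception propagates.

```java
import java.io.Closeable;
import java.io.IOException;

// Hedged sketch of the close-on-error pattern (helper names are illustrative,
// not the actual FsDatasetImpl code). Two files are opened in sequence; if the
// second open fails, the first stream must be closed before the exception
// propagates, or its file descriptor leaks.
public class CloseOnErrorSketch {

    // Stand-in for opening a block/meta file; fails on demand.
    static Closeable open(String name, boolean fail) throws IOException {
        if (fail) {
            throw new IOException("cannot open " + name);
        }
        return () -> System.out.println("closed " + name);
    }

    static void openBoth(boolean failMeta) throws IOException {
        Closeable blockIn = open("block", false);
        try {
            Closeable metaIn = open("meta", failMeta);
            // Success: the caller now owns both streams (closed here for the demo).
            System.out.println("opened both");
            metaIn.close();
            blockIn.close();
        } catch (IOException e) {
            blockIn.close();   // the fix: release what was already opened
            throw e;
        }
    }

    public static void main(String[] args) throws IOException {
        try {
            openBoth(true);    // meta open fails -> block stream still closed
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
        openBoth(false);       // happy path
    }
}
```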





[jira] [Commented] (HDFS-7640) print NFS Client in the NFS log

2015-01-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283179#comment-14283179
 ] 

Brandon Li commented on HDFS-7640:
--

Thank you, Yongjun.

> print NFS Client in the NFS log
> ---
>
> Key: HDFS-7640
> URL: https://issues.apache.org/jira/browse/HDFS-7640
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Trivial
> Attachments: HDFS-7640.001.patch
>
>
> Currently the hdfs-nfs logs do not have any information about NFS clients.
> When multiple clients are using NFS, it becomes hard to distinguish which 
> request came from which client.
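A minimal sketch of the idea, with hypothetical helper names (the real patch touches the gateway's RPC handling): prefix each request log line with the remote client's address and port so requests from different NFS clients can be told apart.

```java
import java.net.InetSocketAddress;

// Hedged sketch (hypothetical helper, not the actual HDFS-7640 patch):
// prefix each request log line with the remote client's address and port
// so requests from different NFS clients can be told apart.
public class ClientLogSketch {

    static String logLine(InetSocketAddress client, String op) {
        return String.format("NFS %s from client %s:%d", op,
                client.getAddress().getHostAddress(), client.getPort());
    }

    public static void main(String[] args) {
        // A numeric IP resolves locally without a DNS lookup.
        InetSocketAddress client = new InetSocketAddress("10.0.0.5", 872);
        System.out.println(logLine(client, "READ"));
    }
}
```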




