[jira] [Updated] (HDFS-6270) Secondary namenode status page shows transaction count in bytes

2014-04-25 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-6270:
---

Status: Patch Available  (was: Open)

 Secondary namenode status page shows transaction count in bytes
 ---

 Key: HDFS-6270
 URL: https://issues.apache.org/jira/browse/HDFS-6270
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HDFS-6270.patch, HDFS-6270.patch


 Though the checkpoint trigger was changed from edit log size to transaction 
 count, the SN UI still shows the limit in terms of bytes.
 It appears as:
 Checkpoint Period: 3600 seconds
 Checkpoint Size  : 976.56 KB (= 1000000 bytes)
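
 For reference, a minimal sketch of the direction such a fix could take; the 
 helper class is hypothetical, while the DFSConfigKeys constants are the real 
 checkpoint settings:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;

 // Sketch: report the checkpoint trigger as a transaction count, not a byte
 // size, when rendering the SecondaryNameNode status page.
 public class CheckpointStatus {
   static String render(Configuration conf) {
     long period = conf.getLong(
         DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
         DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT);
     long txns = conf.getLong(
         DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_KEY,
         DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_TXNS_DEFAULT);
     return "Checkpoint Period      : " + period + " seconds\n"
          + "Checkpoint Transactions: " + txns;
   }
 }
 {code}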



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6270) Secondary namenode status page shows transaction count in bytes

2014-04-25 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-6270:
---

Status: Open  (was: Patch Available)

 Secondary namenode status page shows transaction count in bytes
 ---

 Key: HDFS-6270
 URL: https://issues.apache.org/jira/browse/HDFS-6270
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HDFS-6270.patch, HDFS-6270.patch


 Though the checkpoint trigger was changed from edit log size to transaction 
 count, the SN UI still shows the limit in terms of bytes.
 It appears as:
 Checkpoint Period: 3600 seconds
 Checkpoint Size  : 976.56 KB (= 1000000 bytes)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6210) Support GETACLSTATUS operation in WebImageViewer

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980737#comment-13980737
 ] 

Hudson commented on HDFS-6210:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5571 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5571/])
HDFS-6210. Support GETACLSTATUS operation in WebImageViewer. Contributed by 
Akira Ajisaka. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589933)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java


 Support GETACLSTATUS operation in WebImageViewer
 

 Key: HDFS-6210
 URL: https://issues.apache.org/jira/browse/HDFS-6210
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6210.2.patch, HDFS-6210.3.patch, HDFS-6210.4.patch, 
 HDFS-6210.patch, HDFS-6210.patch


 In HDFS-6170, I found that {{GETACLSTATUS}} operation support is also required 
 to execute hdfs dfs -ls against WebImageViewer.
 {code}
 [root@trunk ~]# hdfs dfs -ls webhdfs://localhost:5978/
 14/04/09 11:53:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Found 1 items
 ls: Unexpected HTTP response: code=400 != 200, op=GETACLSTATUS, message=Bad 
 Request
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5851) Support memory as a storage medium

2014-04-25 Thread eric baldeschwieler (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980741#comment-13980741
 ] 

eric baldeschwieler commented on HDFS-5851:
---

The case of a local short circuit read having access to the open file is 
interesting...  does this pin the memory until the possibly misbehaved client 
process closes the socket / FD?

Single replicas?  Why would one want to triple replicate discardable memory?  
One should at least have the option to only keep a single local copy in HDFS.

If we cannot prevent random access writes to DDM (we could presumably limit 
this in the client API), then I don't think we can checksum or replicate until 
a file is closed.  My gut is that delaying both until close is the right call...

How are discarded or lost (node fails) blocks / files handled?  Do the names 
remain in the NN and get reported in FSCK and other operations?  We want to be 
sure this doesn't add work to operators.  

Can we make these files transient like ZK ephemeral nodes?

Once one assumes you don't need to replicate discardable files, then one can 
think about allocating only an arena name (think directory) in the NN and then 
creating individual files only at the DN, limiting NN interaction.  This would 
be a lot faster.  (You could still have remote access via 
.../ARENA/DN-NAME/name style URLs.)  With this you could vastly reduce NN 
interactions, which is probably good for latency reduction and scalability.  
You could then imagine using this mechanism for MR / Tez / Spark shuffle files 
...  which has been a long term project goal...  Maybe we should break this 
idea out into another JIRA... ?  happy to chat if folks want to flesh this out.

Involving Yarn in HDFS resource management is interestingly circular.  Is this 
needed?  One would want the right abstraction to allow other solutions to be 
applied to Yarnless deployments.

 Support memory as a storage medium
 --

 Key: HDFS-5851
 URL: https://issues.apache.org/jira/browse/HDFS-5851
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: 
 SupportingMemoryStorageinHDFSPersistentandDiscardableMemory.pdf


 Memory can be used as a storage medium for smaller/transient files for fast 
 write throughput.
 More information/design will be added later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6285) tidy an error log inside BlockReceiver

2014-04-25 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-6285:


Attachment: HDFS-6285.txt

 tidy an error log inside BlockReceiver
 --

 Key: HDFS-6285
 URL: https://issues.apache.org/jira/browse/HDFS-6285
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.4.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HDFS-6285.txt


 From this log from our production cluster:
 2014-04-22,10:39:05,476 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in BlockReceiver constructor. Cause is 
 After reading the code, I learned the cause was null, which means no disk 
 error, but the above log looked fragmentary. Attached is a minor change to 
 tidy it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6285) tidy an error log inside BlockReceiver

2014-04-25 Thread Liang Xie (JIRA)
Liang Xie created HDFS-6285:
---

 Summary: tidy an error log inside BlockReceiver
 Key: HDFS-6285
 URL: https://issues.apache.org/jira/browse/HDFS-6285
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.4.0, 3.0.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HDFS-6285.txt

From this log from our production cluster:
2014-04-22,10:39:05,476 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
IOException in BlockReceiver constructor. Cause is 

After reading the code, I learned the cause was null, which means no disk 
error, but the above log looked fragmentary. Attached is a minor change to 
tidy it.
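
A minimal sketch of the tidy-up meant here, assuming the constructor 
distinguishes a null disk-error cause; the helper class is hypothetical:
{code}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Sketch: only append "Cause is" when a disk-error cause was actually found
// (null means no disk error), so the message never ends mid-sentence.
public class TidyReceiverLog {
  private static final Log LOG = LogFactory.getLog(TidyReceiverLog.class);

  static void logConstructorFailure(IOException ioe, IOException diskCause) {
    if (diskCause != null) {
      LOG.warn("IOException in BlockReceiver constructor. Cause is " + diskCause);
    } else {
      LOG.warn("IOException in BlockReceiver constructor", ioe);
    }
  }
}
{code}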



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6285) tidy an error log inside BlockReceiver

2014-04-25 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-6285:


Status: Patch Available  (was: Open)

 tidy an error log inside BlockReceiver
 --

 Key: HDFS-6285
 URL: https://issues.apache.org/jira/browse/HDFS-6285
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.4.0, 3.0.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HDFS-6285.txt


 From this log from our production cluster:
 2014-04-22,10:39:05,476 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in BlockReceiver constructor. Cause is 
 After reading the code, I learned the cause was null, which means no disk 
 error, but the above log looked fragmentary. Attached is a minor change to 
 tidy it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2014-04-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4167:


Attachment: HDFS-4167.000.patch

Here is a very initial patch that only provides the NameNode-side functionality. 
In general, this patch provides a restoreSnapshot call in FSNamesystem, which 
restores a file/directory to the most recent snapshot of the corresponding 
snapshottable directory. In particular, the patch tries to do the following:
1. For a directory, revert its metadata change, restore deleted children and 
delete newly created files/subdirs.
2. For a file, revert its metadata change, delete blocks that were created 
after the snapshot.
3. For a renamed file/dir, if the target of the rename operation is also under 
the restore root directory, rename the file/dir back. Otherwise keep tracking 
the renamed file/dir in the deleted list of the snapshot diff.
4. Update quota correspondingly.

Note that the snapshot must be the most recent one. We throw an exception if 
there are intermediate snapshots. This is the same behavior as ZFS.

Remaining work:
1. Unit tests and bug fixes.
2. Protocol change and FileSystem API.
3. CLI support
4. We also need to figure out how to handle the last block as of when the 
snapshot was taken. Do we want to truncate the block to make the restored 
file's length consistent with the length recorded in the snapshot? But because 
of the current snapshot's copy-on-write semantics, the recorded file length is 
also not accurate. We can only guarantee that the file length recorded in the 
snapshot is no less than the real file length at the time the snapshot was 
taken.

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS Design Proposal.pdf, HDFS-4167.000.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6286) adding a timeout setting for local read io

2014-04-25 Thread Liang Xie (JIRA)
Liang Xie created HDFS-6286:
---

 Summary: adding a timeout setting for local read io
 Key: HDFS-6286
 URL: https://issues.apache.org/jira/browse/HDFS-6286
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.4.0, 3.0.0
Reporter: Liang Xie
Assignee: Liang Xie


Currently, if a write or remote read is issued against a sick disk, 
DFSClient.hdfsTimeout could give the caller a guaranteed bound on the time to 
return, but it doesn't work for local reads. Take an HBase scan for example:
DFSInputStream.read -> readWithStrategy -> readBuffer -> BlockReaderLocal.read 
-> dataIn.read -> FileChannelImpl.read
If it hits a bad disk, the low-level read io probably takes tens of seconds, 
and what's worse, DFSInputStream.read holds a lock the whole time.
To my knowledge, there's no good mechanism to cancel a running read io (please 
correct me if that's wrong), so my suggestion is to add a future around the 
read request and set a timeout there; if the threshold is reached, we could 
probably add the local node to the dead-node list...
Any thoughts?
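
A minimal sketch of the future-plus-timeout idea described above, assuming a 
plain ExecutorService; the class name and the timeout constant are hypothetical:
{code}
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: wrap a potentially hanging local read in a Future so the caller
// gets a bounded wait; on timeout the node could be added to the dead-node
// list and the read retried against a remote replica.
public class BoundedLocalRead {
  private static final long LOCAL_READ_TIMEOUT_MS = 10000; // hypothetical knob
  private final ExecutorService pool = Executors.newCachedThreadPool();

  int read(final ReadableByteChannel ch, final ByteBuffer buf) throws Exception {
    Future<Integer> f = pool.submit(new Callable<Integer>() {
      @Override
      public Integer call() throws Exception {
        return ch.read(buf); // may block for tens of seconds on a sick disk
      }
    });
    try {
      return f.get(LOCAL_READ_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    } catch (TimeoutException te) {
      f.cancel(true); // best-effort cancel
      throw te;       // caller can mark the local node dead and fall back
    }
  }
}
{code}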



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5865:


Attachment: HDFS-5865.2.patch

Attaching a patch to add the description of the Web processor.

 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the following:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Remove deprecated options such as the Delimited and Indented processors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5865:


Status: Patch Available  (was: Open)

 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the following:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Remove deprecated options such as the Delimited and Indented processors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5865:


Description: 
OfflineImageViewer is renewed to handle the new format of fsimage by HDFS-5698 
(fsimage in protobuf).

We should document the followings:

* The tool can handle the layout version of Hadoop 2.4 and up. (If you want to 
handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
* Delimited, Indented, and Ls processor were removed.
* A new Web processor, which supersedes the Ls processor, was added.

  was:
OfflineImageViewer is renewed to handle the new format of fsimage by HDFS-5698 
(fsimage in protobuf).

We should document followings:

* The tool can handle the layout version of Hadoop 2.4 and up. (If you want to 
handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
* Remove deprecated options such as Delimited and Indented processor.



 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the following:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Delimited, Indented, and Ls processors were removed.
 * A new Web processor, which supersedes the Ls processor, was added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6110) adding more slow action log in critical write path

2014-04-25 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-6110:


Attachment: HDFS-6110v5.txt

 adding more slow action log in critical write path
 --

 Key: HDFS-6110
 URL: https://issues.apache.org/jira/browse/HDFS-6110
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.3.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, 
 HDFS-6110v4.txt, HDFS-6110v5.txt


 After digging into an HBase write spike caused by slow buffer io in our 
 cluster, we realized we'd better add more abnormal-latency warning logs to 
 the write flow, so that if others hit an HLog sync spike, we can get more 
 detailed info from the HDFS side at the same time.
 Patch will be uploaded soon.
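
 A minimal sketch of the slow-action warning pattern being described, with a 
 hypothetical class and threshold; the actual patch instruments the real 
 write-path steps:
 {code}
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;

 // Sketch: time a critical write-path step and warn when it is abnormally slow.
 public class SlowActionLog {
   private static final Log LOG = LogFactory.getLog(SlowActionLog.class);
   private static final long SLOW_THRESHOLD_MS = 300; // hypothetical threshold

   static void runTimed(String action, Runnable step) {
     long begin = System.currentTimeMillis();
     step.run();
     long elapsed = System.currentTimeMillis() - begin;
     if (elapsed > SLOW_THRESHOLD_MS) {
       LOG.warn("Slow " + action + ": took " + elapsed + " ms (threshold "
           + SLOW_THRESHOLD_MS + " ms)");
     }
   }
 }
 {code}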



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6110) adding more slow action log in critical write path

2014-04-25 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-6110:


Attachment: HDFS-6110v5.txt

 adding more slow action log in critical write path
 --

 Key: HDFS-6110
 URL: https://issues.apache.org/jira/browse/HDFS-6110
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.3.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, 
 HDFS-6110v4.txt, HDFS-6110v5.txt


 After digging into an HBase write spike caused by slow buffer io in our 
 cluster, we realized we'd better add more abnormal-latency warning logs to 
 the write flow, so that if others hit an HLog sync spike, we can get more 
 detailed info from the HDFS side at the same time.
 Patch will be uploaded soon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6110) adding more slow action log in critical write path

2014-04-25 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-6110:


Attachment: (was: HDFS-6110v5.txt)

 adding more slow action log in critical write path
 --

 Key: HDFS-6110
 URL: https://issues.apache.org/jira/browse/HDFS-6110
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.3.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, 
 HDFS-6110v4.txt, HDFS-6110v5.txt


 After digging into an HBase write spike caused by slow buffer io in our 
 cluster, we realized we'd better add more abnormal-latency warning logs to 
 the write flow, so that if others hit an HLog sync spike, we can get more 
 detailed info from the HDFS side at the same time.
 Patch will be uploaded soon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6110) adding more slow action log in critical write path

2014-04-25 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980759#comment-13980759
 ] 

Liang Xie commented on HDFS-6110:
-

Hi [~cmccabe], the attached v5 should address your comments, thanks :)

 adding more slow action log in critical write path
 --

 Key: HDFS-6110
 URL: https://issues.apache.org/jira/browse/HDFS-6110
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.3.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, 
 HDFS-6110v4.txt, HDFS-6110v5.txt


 After digging into an HBase write spike caused by slow buffer io in our 
 cluster, we realized we'd better add more abnormal-latency warning logs to 
 the write flow, so that if others hit an HLog sync spike, we can get more 
 detailed info from the HDFS side at the same time.
 Patch will be uploaded soon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6210) Support GETACLSTATUS operation in WebImageViewer

2014-04-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6210:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.

 Support GETACLSTATUS operation in WebImageViewer
 

 Key: HDFS-6210
 URL: https://issues.apache.org/jira/browse/HDFS-6210
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6210.2.patch, HDFS-6210.3.patch, HDFS-6210.4.patch, 
 HDFS-6210.patch, HDFS-6210.patch


 In HDFS-6170, I found that {{GETACLSTATUS}} operation support is also required 
 to execute hdfs dfs -ls against WebImageViewer.
 {code}
 [root@trunk ~]# hdfs dfs -ls webhdfs://localhost:5978/
 14/04/09 11:53:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Found 1 items
 ls: Unexpected HTTP response: code=400 != 200, op=GETACLSTATUS, message=Bad 
 Request
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6270) Secondary namenode status page shows transaction count in bytes

2014-04-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980764#comment-13980764
 ] 

Haohui Mai commented on HDFS-6270:
--

[~benoyantony], can you provide a patch for the new UI of the SNN as well (see 
HDFS-6278)?  Thanks.

 Secondary namenode status page shows transaction count in bytes
 ---

 Key: HDFS-6270
 URL: https://issues.apache.org/jira/browse/HDFS-6270
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HDFS-6270.patch, HDFS-6270.patch


 Though the checkpoint trigger was changed from edit log size to transaction 
 count, the SN UI still shows the limit in terms of bytes.
 It appears as:
 Checkpoint Period: 3600 seconds
 Checkpoint Size  : 976.56 KB (= 1000000 bytes)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6252) Namenode old webUI should be deprecated

2014-04-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6252:
-

Attachment: HDFS-6252.003.patch

 Namenode old webUI should be deprecated
 ---

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we download a file via 
 the "download this file" link on browseDirectory.jsp, it will throw an error:
 Problem accessing /streamFile/***
 because the streamFile servlet was deleted in HDFS-5570.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6261) Add document for enabling node group layer in HDFS

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980790#comment-13980790
 ] 

Hadoop QA commented on HDFS-6261:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641871/HDFS-6261.v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6728//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6728//console

This message is automatically generated.

 Add document for enabling node group layer in HDFS
 --

 Key: HDFS-6261
 URL: https://issues.apache.org/jira/browse/HDFS-6261
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation
Reporter: Wenwu Peng
Assignee: Binglin Chang
  Labels: documentation
 Attachments: 3layer-topology.png, 4layer-topology.png, 
 HDFS-6261.v1.patch, HDFS-6261.v1.patch


 Most of the patches from umbrella JIRA HADOOP-8468 have been committed; 
 however, there is no site introducing NodeGroup awareness (Hadoop 
 Virtualization Extensions) and how to configure it, so we need to document it:
 1.  Document NodeGroup-aware topics in http://hadoop.apache.org/docs/current 
 2.  Document NodeGroup-aware properties in core-default.xml.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2014-04-25 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---

Attachment: (was: HDFS-6133.patch)

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133.patch


 Currently, running the Balancer will destroy the Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying the 
 Regionserver's data locality.
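
 A minimal sketch of the prefix-exclusion idea using hypothetical names; the 
 real change would filter block locations inside the NameNode's getBlocks:
 {code}
 import java.util.ArrayList;
 import java.util.List;

 // Sketch: keep only blocks whose owning file does not sit under an excluded
 // prefix (e.g. "/hbase"), so the Balancer never relocates them.
 public class ExcludePathFilter {
   static List<String> excludeByPrefix(List<String> filePaths, String prefix) {
     List<String> kept = new ArrayList<String>();
     for (String path : filePaths) {
       if (!path.startsWith(prefix)) {
         kept.add(path);
       }
     }
     return kept;
   }
 }
 {code}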



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6133) Make Balancer support exclude specified path

2014-04-25 Thread zhaoyunjiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhaoyunjiong updated HDFS-6133:
---

Attachment: HDFS-6133.patch

Uploaded a patch according to the comments.

By the way, do we have a new BM service design?



 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133.patch


 Currently, running the Balancer will destroy the Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying the 
 Regionserver's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6254) hdfsConnect segment fault where namenode not connected

2014-04-25 Thread huang ken (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980812#comment-13980812
 ] 

huang ken commented on HDFS-6254:
-

Chris Nauroth, thanks a lot for your explanation about gdb SIGSEGV handling 
and the programming advice.
But hdfsConnect and hdfsBuilderConnect do return *not NULL* when the namenode 
is not connected in my test, which is not the same as the declaration in 
hdfs.h. Is that right?

 hdfsConnect segment fault where namenode not connected
 --

 Key: HDFS-6254
 URL: https://issues.apache.org/jira/browse/HDFS-6254
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.2.0
 Environment: Linux Centos 64bit
Reporter: huang ken
Assignee: Chris Nauroth

 When the namenode is not started, the libhdfs client hits a segmentation 
 fault while connecting.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6285) tidy an error log inside BlockReceiver

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980841#comment-13980841
 ] 

Hadoop QA commented on HDFS-6285:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641876/HDFS-6285.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6729//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6729//console

This message is automatically generated.

 tidy an error log inside BlockReceiver
 --

 Key: HDFS-6285
 URL: https://issues.apache.org/jira/browse/HDFS-6285
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.4.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HDFS-6285.txt


 From this log from our production cluster:
 2014-04-22,10:39:05,476 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 IOException in BlockReceiver constructor. Cause is 
 After reading the code, I learned the cause was null, which means no disk 
 error, but the above log looked fragmentary. Attached is a minor change to 
 tidy it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6110) adding more slow action log in critical write path

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980873#comment-13980873
 ] 

Hadoop QA commented on HDFS-6110:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641883/HDFS-6110v5.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6730//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6730//console

This message is automatically generated.

 adding more slow action log in critical write path
 --

 Key: HDFS-6110
 URL: https://issues.apache.org/jira/browse/HDFS-6110
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.3.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, 
 HDFS-6110v4.txt, HDFS-6110v5.txt


 After digging into an HBase write spike caused by slow buffer io in our 
 cluster, we realized we'd better add more abnormal-latency warning logs to 
 the write flow, so that if others hit an HLog sync spike, we can get more 
 detailed info from the HDFS side at the same time.
 Patch will be uploaded soon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6262) HDFS doesn't raise FileNotFoundException if the source of a rename() is missing

2014-04-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980876#comment-13980876
 ] 

Steve Loughran commented on HDFS-6262:
--

Suresh, thanks for the link to HADOOP-6240 - I hadn't seen that. But: *every 
other filesystem* considers renaming a file that doesn't exist to be an error.

Do we have any examples where failing to fault on renaming a nonexistent file 
is NOT an error to flag up? 

Looking at the hadoop production source
* {{org.apache.hadoop.fs.shell.MoveCommands}} says "we have no way to know the 
actual error"... and throws a {{PathIOException}}
* {{org.apache.hadoop.fs.shell.CommandWithDestination}} says "too bad, we don't 
know why it failed" and does the same
* {{org.apache.hadoop.io.MapFile}} raises an IOException
* {{org.apache.hadoop.tools.mapred.CopyCommitter}} raises an IOE, as does 
{{org.apache.hadoop.tools.mapred.RetriableFileCopyCommand}}

Similar behaviour for: 
{code}
LocalContainerLauncher, DistCpV1
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService, 
...
{code}

and those that blindly assume that rename's return value doesn't need checking
{code}
JobHistoryEventHandler
TaskLog (on localFS though)
org.apache.hadoop.mapreduce.task.reduce.OnDiskMapOutput
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore

{code}

In fact, the only bit of code I can see that converts the false return code to 
a warning is {{org.apache.hadoop.tools.mapred.lib.DynamicInputChunk}}.

To summarise: in the Hadoop production code, in all but one case the handling 
of a false return code takes one of two forms:
# triggers the throwing of a "that failed but we don't know why" {{IOException}}
# is blissfully ignorant that the operation has failed, and has so far been 
lucky in avoiding concurrency problems with the source being renamed while 
nobody was looking.

All of these uses would benefit from having rename consistently throw a 
FileNotFoundException if the source file isn't there.
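
A minimal sketch of the proposed contract as a caller-side wrapper; 
{{FileSystem#rename}} and {{FileSystem#exists}} are real APIs, the wrapper 
itself is hypothetical:
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the behaviour argued for above: a missing rename source surfaces
// as FileNotFoundException instead of a bare "false" return.
public class StrictRename {
  static void rename(FileSystem fs, Path src, Path dst) throws IOException {
    if (!fs.rename(src, dst)) {
      if (!fs.exists(src)) {
        throw new FileNotFoundException("rename source " + src + " not found");
      }
      throw new IOException("rename(" + src + ", " + dst + ") failed");
    }
  }
}
{code}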




 HDFS doesn't raise FileNotFoundException if the source of a rename() is 
 missing
 ---

 Key: HDFS-6262
 URL: https://issues.apache.org/jira/browse/HDFS-6262
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
 Attachments: HDFS-6262.2.patch, HDFS-6262.patch


 HDFS's {{rename(src, dest)}} returns false if src does not exist - all the 
 other filesystems raise {{FileNotFoundException}}.
 This behaviour is defined in {{FSDirectory.unprotectedRenameTo()}} - the 
 attempt is logged, but the operation then just returns false.
 I propose changing the behaviour of {{DistributedFileSystem}} to be the same 
 as that of the others - and of {{FileContext}}, which does reject renames 
 with nonexistent sources.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980881#comment-13980881
 ] 

Hadoop QA commented on HDFS-5865:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641879/HDFS-5865.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6731//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6731//console

This message is automatically generated.

 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the following:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Delimited, Indented, and Ls processors were removed.
 * A new Web processor, which supersedes the Ls processor, was added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2014-04-25 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980880#comment-13980880
 ] 

Vinayakumar B commented on HDFS-4167:
-

Hi,

It's nice to see this feature on board, instead of a manual restore from the 
snapshot.

I see that the attached design document takes a different approach than the 
attached patch. That can be confusing.. ;)

Some quick comments on the patch:
1. {code}
+   * @param restoreRoot
+   *  The file/dir to restore * @param collectedBlocks blocks collected
+   *  from the descents for further block deletion/update will be added
+   *  to the given map.
{code}
This should be properly formatted; the description of collectedBlocks got 
mixed into restoreRoot's.

2. {code}
+DirectoryWithSnapshotFeature sf = getDirectoryWithSnapshotFeature();
+Quota.Counts delta = Quota.Counts.newInstance();
+if (sf != null) {
+  sf.restoreSnapshot(this, restoreRoot, snapshot, collectedBlocks,
+  removedINodes);
+}
+return delta;
{code}
The return quota values from {{sf.restoreSnapshot(..)}} are not used.


3. The FSDirectory.java changes could be dropped from the patch, as they are 
whitespace-only changes.

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS Design Proposal.pdf, HDFS-4167.000.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6247) Avoid timeouts for replaceBlock() call by sending intermediate responses to Balancer

2014-04-25 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980895#comment-13980895
 ] 

Vinayakumar B commented on HDFS-6247:
-

The test failure is not related to the current patch.

 Avoid timeouts for replaceBlock() call by sending intermediate responses to 
 Balancer
 

 Key: HDFS-6247
 URL: https://issues.apache.org/jira/browse/HDFS-6247
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer, datanode
Affects Versions: 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-6247.patch, HDFS-6247.patch


 Currently there is no response sent from the target Datanode to the Balancer 
 during replaceBlock() calls.
 Since block movement for balancing is throttled, a complete block movement 
 will take time, and this could result in a timeout at the Balancer, which 
 will be trying to read the status message.
  
 To avoid this, during a replaceBlock() call the Datanode can send IN_PROGRESS 
 status messages to the Balancer, avoiding both the timeout and treating the 
 block movement as failed.
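
 A minimal sketch of the keepalive idea with a hypothetical constant and wire 
 format; the real patch would live inside the Datanode's replaceBlock handling 
 and use the actual status messages:
 {code}
 import java.io.DataOutputStream;
 import java.io.IOException;

 // Sketch: while a throttled block move is still running, periodically write
 // an IN_PROGRESS status to the Balancer so its blocking read never times out.
 public class ReplaceBlockKeepalive {
   private static final long RESPONSE_INTERVAL_MS = 30000; // hypothetical

   static long maybeSendInProgress(DataOutputStream reply, long lastSentMs)
       throws IOException {
     long now = System.currentTimeMillis();
     if (now - lastSentMs >= RESPONSE_INTERVAL_MS) {
       reply.writeByte(1); // stand-in for a real IN_PROGRESS status message
       reply.flush();
       return now;
     }
     return lastSentMs;
   }
 }
 {code}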



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6252) Namenode old webUI should be deprecated

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980899#comment-13980899
 ] 

Hadoop QA commented on HDFS-6252:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641888/HDFS-6252.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 47 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common:

  org.apache.hadoop.hdfs.qjournal.TestNNWithQJM

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6732//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6732//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6732//console

This message is automatically generated.

 Namenode old webUI should be deprecated
 ---

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we download a file via 
 the "download this file" link on browseDirectory.jsp, it will throw an error:
 Problem accessing /streamFile/***
 because the streamFile servlet was deleted in HDFS-5570.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6282) re-add testIncludeByRegistrationName

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980922#comment-13980922
 ] 

Hudson commented on HDFS-6282:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #551 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/551/])
HDFS-6282. Re-add testIncludeByRegistrationName (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589907)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java


 re-add testIncludeByRegistrationName
 

 Key: HDFS-6282
 URL: https://issues.apache.org/jira/browse/HDFS-6282
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.5.0

 Attachments: HDFS-6282.001.patch


 Re-add a test of using DataNode registration names in an HDFS host include 
 file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6246) Remove 'dfs.support.append' flag from trunk code

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980928#comment-13980928
 ] 

Hudson commented on HDFS-6246:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #551 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/551/])
HDFS-6246. Remove 'dfs.support.append' flag from trunk code. Contributed by Uma 
Maheswara Rao G. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589927)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


 Remove 'dfs.support.append' flag from trunk code
 

 Key: HDFS-6246
 URL: https://issues.apache.org/jira/browse/HDFS-6246
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6246.patch


 We added the 'dfs.support.append' flag long ago to control issues with the 
 append feature.  In trunk and hadoop-2 it is enabled by default and has been 
 in use for a long time.  
 So, I propose to remove that property now, as we don't see any issue with 
 always enabling the append feature. 
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6281) Provide option to use the NFS Gateway without having to use the Hadoop portmapper

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980921#comment-13980921
 ] 

Hudson commented on HDFS-6281:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #551 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/551/])
HDFS-6281. Provide option to use the NFS Gateway without having to use the 
Hadoop portmapper. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589914)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestFrameDecoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/Mountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/PrivilegedNfsGatewayStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


 Provide option to use the NFS Gateway without having to use the Hadoop 
 portmapper
 -

 Key: HDFS-6281
 URL: https://issues.apache.org/jira/browse/HDFS-6281
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.5.0

 Attachments: HDFS-6281.patch, HDFS-6281.patch


 In order to use the NFS Gateway on operating systems with the rpcbind 
 privileged registration bug, we currently require users to shut down and 
 discontinue use of the system-provided portmap daemon, and instead use the 
 portmap daemon provided by Hadoop. Alternately, we can work around this bug 
 if we tweak the NFS Gateway to perform its port registration from a 
 privileged port, and still let users use the system portmap daemon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6210) Support GETACLSTATUS operation in WebImageViewer

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980925#comment-13980925
 ] 

Hudson commented on HDFS-6210:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #551 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/551/])
HDFS-6210. Support GETACLSTATUS operation in WebImageViewer. Contributed by 
Akira Ajisaka. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589933)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java


 Support GETACLSTATUS operation in WebImageViewer
 

 Key: HDFS-6210
 URL: https://issues.apache.org/jira/browse/HDFS-6210
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6210.2.patch, HDFS-6210.3.patch, HDFS-6210.4.patch, 
 HDFS-6210.patch, HDFS-6210.patch


 In HDFS-6170, I found that {{GETACLSTATUS}} operation support is also required 
 to execute hdfs dfs -ls against WebImageViewer.
 {code}
 [root@trunk ~]# hdfs dfs -ls webhdfs://localhost:5978/
 14/04/09 11:53:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Found 1 items
 ls: Unexpected HTTP response: code=400 != 200, op=GETACLSTATUS, message=Bad 
 Request
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6273) Config options to allow wildcard endpoints for namenode HTTP and HTTPS servers

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980923#comment-13980923
 ] 

Hudson commented on HDFS-6273:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #551 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/551/])
HDFS-6273. Add file missed in previous checkin. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589808)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
HDFS-6273. Config options to allow wildcard endpoints for namenode HTTP and 
HTTPS servers. (Contributed by Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589803)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Config options to allow wildcard endpoints for namenode HTTP and HTTPS servers
 --

 Key: HDFS-6273
 URL: https://issues.apache.org/jira/browse/HDFS-6273
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6273.01.patch, HDFS-6273.02.patch, 
 HDFS-6273.03.patch, HDFS-6273.04.patch


 The NameNode already has a couple of keys that allow the RPC and Service RPC 
 servers to bind to the wildcard address (0.0.0.0), which is useful in 
 multihomed environments:
 # {{dfs.namenode.rpc-bind-host}}
 # {{dfs.namenode.servicerpc-address}}
 This Jira is to add similar options for the HTTP and HTTPS endpoints.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6266) Identify full path for a given INode

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980926#comment-13980926
 ] 

Hudson commented on HDFS-6266:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #551 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/551/])
HDFS-6266. Identify full path for a given INode. Contributed by Jing Zhao. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589920)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


 Identify full path for a given INode
 

 Key: HDFS-6266
 URL: https://issues.apache.org/jira/browse/HDFS-6266
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.5.0

 Attachments: HDFS-6266.000.patch, HDFS-6266.001.patch


 Currently when identifying the full path of a given inode, 
 FSDirectory#getPathComponents and FSDirectory#getFullPathName can only handle 
 normal cases where the inode and its ancestors are not in any snapshot. This 
 jira aims to provide support to handle snapshots. This can be useful for 
 identifying the Rename change in a snapshot diff report.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Support XAttrs from NameNode and implements XAttr APIs for DistributedFileSystem

2014-04-25 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980970#comment-13980970
 ] 

Yi Liu commented on HDFS-6258:
--

Thanks Chris for your suggestion; it makes sense, and I'm doing this :). 
For the XAttr config flag, it is {{dfs.namenode.xattrs.enabled}}.
Should I merge it with the ACL config flag? My thought is that it should 
remain a separate configuration property.
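
For illustration, a separate flag would follow the usual DFSConfigKeys pattern. 
A minimal sketch (the constant names and default value are my assumptions, not 
the patch):
{code}
import org.apache.hadoop.conf.Configuration;

public class XAttrConfigSketch {
  // Assumed constants, mirroring the DFSConfigKeys style.
  public static final String DFS_NAMENODE_XATTRS_ENABLED_KEY =
      "dfs.namenode.xattrs.enabled";
  public static final boolean DFS_NAMENODE_XATTRS_ENABLED_DEFAULT = true;

  // Read the flag once, e.g. at namesystem startup.
  public static boolean isXAttrsEnabled(Configuration conf) {
    return conf.getBoolean(DFS_NAMENODE_XATTRS_ENABLED_KEY,
        DFS_NAMENODE_XATTRS_ENABLED_DEFAULT);
  }
}
{code}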

 Support XAttrs from NameNode and implements XAttr APIs for 
 DistributedFileSystem
 

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.patch


 This JIRA is to implement extended attributes in HDFS: support XAttrs from 
 NameNode, implements XAttr APIs for DistributedFileSystem and so on.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980981#comment-13980981
 ] 

Hadoop QA commented on HDFS-6133:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641891/HDFS-6133.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDistributedFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6733//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6733//console

This message is automatically generated.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133.patch


 Currently, running the Balancer will destroy the RegionServer's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying the 
 RegionServer's data locality.
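 A toy sketch of the idea (class and method names are illustrative only, not 
 the attached patch):
 {code}
 import java.util.List;

 public class ExcludePathSketch {
   // Skip blocks of files whose path starts with an excluded prefix,
   // e.g. "/hbase", so the Balancer leaves those replicas in place.
   static boolean isExcluded(String filePath, List<String> excludedPrefixes) {
     for (String prefix : excludedPrefixes) {
       if (filePath.startsWith(prefix)) {
         return true;
       }
     }
     return false;
   }
 }
 {code}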



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6282) re-add testIncludeByRegistrationName

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980985#comment-13980985
 ] 

Hudson commented on HDFS-6282:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1768 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1768/])
HDFS-6282. Re-add testIncludeByRegistrationName (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589907)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java


 re-add testIncludeByRegistrationName
 

 Key: HDFS-6282
 URL: https://issues.apache.org/jira/browse/HDFS-6282
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.5.0

 Attachments: HDFS-6282.001.patch


 Re-add a test of using DataNode registration names in an HDFS host include 
 file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6246) Remove 'dfs.support.append' flag from trunk code

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980991#comment-13980991
 ] 

Hudson commented on HDFS-6246:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1768 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1768/])
HDFS-6246. Remove 'dfs.support.append' flag from trunk code. Contributed by Uma 
Maheswara Rao G. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589927)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


 Remove 'dfs.support.append' flag from trunk code
 

 Key: HDFS-6246
 URL: https://issues.apache.org/jira/browse/HDFS-6246
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6246.patch


 We added the 'dfs.support.append' flag long ago to control issues with the 
 append feature. In trunk and hadoop-2 it has been enabled by default and in 
 use for a long time. 
 So, I propose to remove the property now, as we don't see any issue with 
 always enabling the append feature. 
 Thoughts?
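 For what it's worth, if a gentler retirement were wanted, one conventional 
 option is a deprecation warning for configurations that still set the key (a 
 sketch only; the patch may simply delete the key outright):
 {code}
 import org.apache.hadoop.conf.Configuration;

 public class AppendFlagSketch {
   static void deprecateAppendFlag() {
     // Logs a warning when a loaded configuration still sets the old key.
     Configuration.addDeprecation("dfs.support.append",
         "append is always enabled; this key is ignored");
   }
 }
 {code}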



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6281) Provide option to use the NFS Gateway without having to use the Hadoop portmapper

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980984#comment-13980984
 ] 

Hudson commented on HDFS-6281:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1768 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1768/])
HDFS-6281. Provide option to use the NFS Gateway without having to use the 
Hadoop portmapper. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589914)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestFrameDecoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/Mountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/PrivilegedNfsGatewayStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


 Provide option to use the NFS Gateway without having to use the Hadoop 
 portmapper
 -

 Key: HDFS-6281
 URL: https://issues.apache.org/jira/browse/HDFS-6281
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.5.0

 Attachments: HDFS-6281.patch, HDFS-6281.patch


 In order to use the NFS Gateway on operating systems with the rpcbind 
 privileged registration bug, we currently require users to shut down and 
 discontinue use of the system-provided portmap daemon, and instead use the 
 portmap daemon provided by Hadoop. Alternatively, we can work around this bug 
 by tweaking the NFS Gateway to perform its port registration from a 
 privileged port, which still lets users use the system portmap daemon.
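 The core of the workaround, sketched under assumptions (the class name and 
 port below are illustrative, not from the patch): originate the registration 
 traffic from a privileged local port so a stock rpcbind accepts it.
 {code}
 import java.net.DatagramSocket;
 import java.net.InetSocketAddress;

 public class PrivilegedRegistrationSketch {
   static DatagramSocket openRegistrationSocket() throws Exception {
     // Binding below port 1024 requires root (or CAP_NET_BIND_SERVICE);
     // port 950 is purely illustrative.
     return new DatagramSocket(new InetSocketAddress("0.0.0.0", 950));
   }
 }
 {code}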



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6273) Config options to allow wildcard endpoints for namenode HTTP and HTTPS servers

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980986#comment-13980986
 ] 

Hudson commented on HDFS-6273:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1768 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1768/])
HDFS-6273. Add file missed in previous checkin. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589808)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
HDFS-6273. Config options to allow wildcard endpoints for namenode HTTP and 
HTTPS servers. (Contributed by Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589803)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Config options to allow wildcard endpoints for namenode HTTP and HTTPS servers
 --

 Key: HDFS-6273
 URL: https://issues.apache.org/jira/browse/HDFS-6273
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6273.01.patch, HDFS-6273.02.patch, 
 HDFS-6273.03.patch, HDFS-6273.04.patch


 The NameNode already has a couple of keys to allow the RPC and Service RPC 
 servers to bind the wildcard address (0.0.0.0), which is useful in multihomed 
 environments, via:
 # {{dfs.namenode.rpc-bind-host}}
 # {{dfs.namenode.servicerpc-address}}
 This Jira is to add similar options for the HTTP and HTTPS endpoints.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6210) Support GETACLSTATUS operation in WebImageViewer

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980988#comment-13980988
 ] 

Hudson commented on HDFS-6210:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1768 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1768/])
HDFS-6210. Support GETACLSTATUS operation in WebImageViewer. Contributed by 
Akira Ajisaka. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589933)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java


 Support GETACLSTATUS operation in WebImageViewer
 

 Key: HDFS-6210
 URL: https://issues.apache.org/jira/browse/HDFS-6210
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6210.2.patch, HDFS-6210.3.patch, HDFS-6210.4.patch, 
 HDFS-6210.patch, HDFS-6210.patch


 In HDFS-6170, I found {{GETACLSTATUS}} operation support is also required to 
 execute "hdfs dfs -ls" against WebImageViewer.
 {code}
 [root@trunk ~]# hdfs dfs -ls webhdfs://localhost:5978/
 14/04/09 11:53:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Found 1 items
 ls: Unexpected HTTP response: code=400 != 200, op=GETACLSTATUS, message=Bad 
 Request
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6266) Identify full path for a given INode

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980989#comment-13980989
 ] 

Hudson commented on HDFS-6266:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1768 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1768/])
HDFS-6266. Identify full path for a given INode. Contributed by Jing Zhao. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589920)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


 Identify full path for a given INode
 

 Key: HDFS-6266
 URL: https://issues.apache.org/jira/browse/HDFS-6266
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.5.0

 Attachments: HDFS-6266.000.patch, HDFS-6266.001.patch


 Currently when identifying the full path of a given inode, 
 FSDirectory#getPathComponents and FSDirectory#getFullPathName can only handle 
 normal cases where the inode and its ancestors are not in any snapshot. This 
 jira aims to provide support to handle snapshots. This can be useful for 
 identifying the Rename change in a snapshot diff report.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5693) Few NN metrics data points were collected via JMX when NN is under heavy load

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981003#comment-13981003
 ] 

Hudson commented on HDFS-5693:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-5693. Few NN metrics data points were collected via JMX when NN is under 
heavy load. Contributed by Ming Ma. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589620)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java


 Few NN metrics data points were collected via JMX when NN is under heavy load
 -

 Key: HDFS-5693
 URL: https://issues.apache.org/jira/browse/HDFS-5693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.5.0

 Attachments: HADOOP-5693.patch, HDFS-5693-2.patch, HDFS-5693.patch


 JMX sometimes doesn't return any value when the NN is under heavy load, yet 
 that is exactly when we would like to get metrics to help diagnose the issue.
 When the NN is under heavy load due to a bad application or other reasons, it 
 holds FSNamesystem's writer lock for a long period of time. Many of the 
 FSNamesystem metrics require FSNamesystem's reader lock and thus can't be 
 processed.
 This is a special case to improve the overall NN concurrency.
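 One way to picture the direction of the fix (a sketch with assumed names, not 
 the committed change): serve such metrics from state that can be read without 
 taking the FSNamesystem lock.
 {code}
 import java.util.concurrent.atomic.AtomicLong;

 public class LockFreeMetricSketch {
   // Updated by write operations; readable by JMX without the FSN lock.
   private final AtomicLong blocksTotal = new AtomicLong();

   public void incrBlocksTotal() {
     blocksTotal.incrementAndGet();
   }

   public long getBlocksTotal() {
     return blocksTotal.get(); // never blocks behind the writer lock
   }
 }
 {code}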



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6217) Webhdfs PUT operations may not work via a http proxy

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981011#comment-13981011
 ] 

Hudson commented on HDFS-6217:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6217. Webhdfs PUT operations may not work via a http proxy. Contributed by 
Daryn Sharp. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589528)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsContentLength.java


 Webhdfs PUT operations may not work via a http proxy
 

 Key: HDFS-6217
 URL: https://issues.apache.org/jira/browse/HDFS-6217
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6217.patch


 Most of webhdfs's PUT operations have no message body.  The HTTP/1.1 spec is 
 fuzzy about how PUT requests with no body should be handled.  If the request 
 does not specify chunking or Content-Length, the server _may_ consider the 
 request to have no body.  However, popular proxies such as Apache Traffic 
 Server will reject PUT requests with no body unless "Content-Length: 0" is 
 specified.
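 For instance, a client can advertise the empty body explicitly (a hedged 
 sketch of the idea, not necessarily the committed fix):
 {code}
 import java.net.HttpURLConnection;
 import java.net.URL;

 public class EmptyPutSketch {
   static int emptyPut(URL url) throws Exception {
     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
     conn.setRequestMethod("PUT");
     conn.setDoOutput(true);
     conn.setFixedLengthStreamingMode(0); // emits "Content-Length: 0"
     conn.getOutputStream().close();      // zero-byte body
     return conn.getResponseCode();
   }
 }
 {code}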



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6279) Create new index page for JN / DN

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981008#comment-13981008
 ] 

Hudson commented on HDFS-6279:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6279. Create new index page for JN / DN. Contributed by Haohui Mai. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589618)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/index.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html


 Create new index page for JN / DN
 -

 Key: HDFS-6279
 URL: https://issues.apache.org/jira/browse/HDFS-6279
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HDFS-6279.000.patch


 This jira proposes to replace the JSP UIs of the DN / JN with web pages that 
 match the look and feel of the NN UI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6282) re-add testIncludeByRegistrationName

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981000#comment-13981000
 ] 

Hudson commented on HDFS-6282:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6282. Re-add testIncludeByRegistrationName (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589907)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java


 re-add testIncludeByRegistrationName
 

 Key: HDFS-6282
 URL: https://issues.apache.org/jira/browse/HDFS-6282
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.5.0

 Attachments: HDFS-6282.001.patch


 Re-add a test of using DataNode registration names in an HDFS host include 
 file.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6266) Identify full path for a given INode

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981020#comment-13981020
 ] 

Hudson commented on HDFS-6266:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6266. Identify full path for a given INode. Contributed by Jing Zhao. 
(jing9: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589920)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


 Identify full path for a given INode
 

 Key: HDFS-6266
 URL: https://issues.apache.org/jira/browse/HDFS-6266
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.5.0

 Attachments: HDFS-6266.000.patch, HDFS-6266.001.patch


 Currently when identifying the full path of a given inode, 
 FSDirectory#getPathComponents and FSDirectory#getFullPathName can only handle 
 normal cases where the inode and its ancestors are not in any snapshot. This 
 jira aims to provide support to handle snapshots. This can be useful for 
 identifying the Rename change in a snapshot diff report.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6210) Support GETACLSTATUS operation in WebImageViewer

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981017#comment-13981017
 ] 

Hudson commented on HDFS-6210:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6210. Support GETACLSTATUS operation in WebImageViewer. Contributed by 
Akira Ajisaka. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589933)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForAcl.java


 Support GETACLSTATUS operation in WebImageViewer
 

 Key: HDFS-6210
 URL: https://issues.apache.org/jira/browse/HDFS-6210
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 2.5.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 2.5.0

 Attachments: HDFS-6210.2.patch, HDFS-6210.3.patch, HDFS-6210.4.patch, 
 HDFS-6210.patch, HDFS-6210.patch


 In HDFS-6170, I found {{GETACLSTATUS}} operation support is also required to 
 execute "hdfs dfs -ls" against WebImageViewer.
 {code}
 [root@trunk ~]# hdfs dfs -ls webhdfs://localhost:5978/
 14/04/09 11:53:04 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 Found 1 items
 ls: Unexpected HTTP response: code=400 != 200, op=GETACLSTATUS, message=Bad 
 Request
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6273) Config options to allow wildcard endpoints for namenode HTTP and HTTPS servers

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981016#comment-13981016
 ] 

Hudson commented on HDFS-6273:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6273. Add file missed in previous checkin. (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589808)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRespectsBindHostKeys.java
HDFS-6273. Config options to allow wildcard endpoints for namenode HTTP and 
HTTPS servers. (Contributed by Arpit Agarwal) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589803)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Config options to allow wildcard endpoints for namenode HTTP and HTTPS servers
 --

 Key: HDFS-6273
 URL: https://issues.apache.org/jira/browse/HDFS-6273
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6273.01.patch, HDFS-6273.02.patch, 
 HDFS-6273.03.patch, HDFS-6273.04.patch


 The NameNode already has a couple of keys to allow the RPC and Service RPC 
 servers to bind the wildcard address (0.0.0.0), which is useful in multihomed 
 environments, via:
 # {{dfs.namenode.rpc-bind-host}}
 # {{dfs.namenode.servicerpc-address}}
 This Jira is to add similar options for the HTTP and HTTPS endpoints.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6278) Create HTML5-based UI for SNN

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981006#comment-13981006
 ] 

Hudson commented on HDFS-6278:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6278. Create HTML5-based UI for SNN. Contributed by Haohui Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589613)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNodeInfoMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/VersionInfoMXBean.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfs-dust.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/index.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/snn.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dfs-dust.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryWebUi.java


 Create HTML5-based UI for SNN
 -

 Key: HDFS-6278
 URL: https://issues.apache.org/jira/browse/HDFS-6278
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HDFS-6278.000.patch


 This jira proposes to create an HTML5-based UI for the SNN that matches the 
 look and feel of the current NN UI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6274) Cleanup javadoc warnings in HDFS code

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981015#comment-13981015
 ] 

Hudson commented on HDFS-6274:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6274. Cleanup javadoc warnings in HDFS code. Contributed by Suresh 
Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589506)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockStorageLocationUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientDatanodeProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedReplica.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBeingWritten.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipelineInterface.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaWaitingToBeRecovered.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/AvailableSpaceVolumeChoosingPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 

[jira] [Commented] (HDFS-6275) Fix warnings - type arguments can be inferred and redundant local variable

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981019#comment-13981019
 ] 

Hudson commented on HDFS-6275:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6275. Fix warnings - type arguments can be inferred and redundant local 
variable. Contributed by Suresh Srinivas. (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589510)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImagePreTransactionalStorageInspector.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageServlet.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumJournalManagerUnit.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAclTransformation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestTransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/TestOfflineEditsViewer.java


 Fix warnings - type arguments can be inferred and redundant local variable
 -

 Key: HDFS-6275
 URL: https://issues.apache.org/jira/browse/HDFS-6275
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.5.0

 Attachments: HDFS-6275.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6246) Remove 'dfs.support.append' flag from trunk code

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981022#comment-13981022
 ] 

Hudson commented on HDFS-6246:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6246. Remove 'dfs.support.append' flag from trunk code. Contributed by Uma 
Maheswara Rao G. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589927)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java


 Remove 'dfs.support.append' flag from trunk code
 

 Key: HDFS-6246
 URL: https://issues.apache.org/jira/browse/HDFS-6246
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6246.patch


 We added the 'dfs.support.append' flag long ago to control issues with the 
 append feature. In trunk and hadoop-2 it has been enabled by default and in 
 use for a long time. 
 So, I propose to remove the property now, as we don't see any issue with 
 always enabling the append feature. 
 Thoughts?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6281) Provide option to use the NFS Gateway without having to use the Hadoop portmapper

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13980999#comment-13980999
 ] 

Hudson commented on HDFS-6281:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6281. Provide option to use the NFS Gateway without having to use the 
Hadoop portmapper. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589914)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/SimpleUdpClient.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/oncrpc/TestFrameDecoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/Mountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/PrivilegedNfsGatewayStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsNfsGateway.apt.vm


 Provide option to use the NFS Gateway without having to use the Hadoop 
 portmapper
 -

 Key: HDFS-6281
 URL: https://issues.apache.org/jira/browse/HDFS-6281
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.5.0

 Attachments: HDFS-6281.patch, HDFS-6281.patch


 In order to use the NFS Gateway on operating systems with the rpcbind 
 privileged registration bug, we currently require users to shut down and 
 discontinue use of the system-provided portmap daemon, and instead use the 
 portmap daemon provided by Hadoop. Alternatively, we can work around this bug 
 by tweaking the NFS Gateway to perform its port registration from a 
 privileged port, which still lets users use the system portmap daemon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6276) Remove unnecessary conditions and null check

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981018#comment-13981018
 ] 

Hudson commented on HDFS-6276:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-6276. Remove unnecessary conditions and null check. Contributed by Suresh 
Srinivas (suresh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589586)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RollingLogsImpl.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachePool.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SaveNamespaceCancelledException.java


 Remove unnecessary conditions and null check
 

 Key: HDFS-6276
 URL: https://issues.apache.org/jira/browse/HDFS-6276
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 2.5.0

 Attachments: HDFS-6276.1.patch, HDFS-6276.patch


 The code has many places where null checks and other condition checks are 
 unnecessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5213) separate PathBasedCacheEntry and PathBasedCacheDirectiveWithId

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981023#comment-13981023
 ] 

Hudson commented on HDFS-5213:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1742 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1742/])
HDFS-5213. TestDataNodeConfig failing on Jenkins runs due to DN web port in 
use. (wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1589474)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeConfig.java


 separate PathBasedCacheEntry and PathBasedCacheDirectiveWithId
 --

 Key: HDFS-5213
 URL: https://issues.apache.org/jira/browse/HDFS-5213
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HDFS-4949

 Attachments: HDFS-5213-caching.001.patch, HDFS-5213-caching.003.patch


 Since PathBasedCacheEntry is intended to be a private (implementation) class,
 return PathBasedCacheDirectiveWithId from all public APIs instead.  Some 
 other miscellaneous cleanups in the caching RPC stuff.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-04-25 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981055#comment-13981055
 ] 

Daryn Sharp commented on HDFS-6133:
---

[~stack], I quickly skimmed HDFS-2576.  Favored placement appears to dovetail 
very nicely with replica pinning if the goal is to place a 2nd replica on a 
specific node in the event the primary region server fails.  Whether one or 
more replicas are pinned to preserve favored placement, it would appear to meet 
the requirement of this jira, correct?

[~zhaoyunjiong], ignoring the mid- to long-term goal of separating the BM 
service, the precondition is simple and is a valuable improvement to the NN in 
its own right: untangle the namesystem / directory / BM architecture.  Each 
layer should have a specific role with no back references to other layers.  
Namesystem is path based, directory is inode based, BM is just a BM.

Allowing no back references also opens the possibility of breaking up the 
global locking by locking subsystems without fear of deadlocks.  In and of 
itself, the NN will benefit from a cleaner design and more targeted locking, 
and if done correctly, it's a small leap to run the BM as a separate service, 
which in turn opens the door to allowing services other than the NN to utilize 
the block manager.  This is why I'm opposed (-1) to making the BM path aware.  
It ruins the ability to do any of the above.

Do you believe block pinning will address your issue?
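
To make the layering concrete, a toy sketch of the separation described above 
(all names are illustrative):
{code}
// Each layer speaks only its own vocabulary; no back references.
interface Namesystem {                     // path based
  void rename(String srcPath, String dstPath);
}

interface Directory {                      // inode based
  long lookupINode(String path);
}

interface BlockManager {                   // blocks only; no paths, no inodes
  void addBlock(long blockId, short replication);
}
{code}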

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133.patch


 Currently, running the Balancer will destroy the RegionServer's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying the 
 RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-3122) Block recovery with closeFile flag true can race with blockReport. Due to this blocks are getting marked as corrupt.

2014-04-25 Thread Mit Desai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mit Desai resolved HDFS-3122.
-

  Resolution: Not a Problem
Target Version/s: 0.23.3, 0.24.0  (was: 0.24.0, 0.23.3)

Haven't heard anything yet, so resolving this issue. Feel free to reopen if 
anyone thinks otherwise.

 Block recovery with closeFile flag true can race with blockReport. Due to 
 this blocks are getting marked as corrupt.
 

 Key: HDFS-3122
 URL: https://issues.apache.org/jira/browse/HDFS-3122
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 0.23.0, 0.24.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Critical
 Attachments: blockCorrupt.txt


 A *block report* can *race* with *block recovery* when the closeFile flag is 
 true.
 The block report is generated just before block recovery on the DN side, and 
 due to network problems the report is delayed in reaching the NN. 
 Meanwhile, recovery succeeds and the generation stamp is updated to a new one. 
 The primary DN invokes commitBlockSynchronization and the block is updated on 
 the NN side. The block is also marked as complete, since the closeFile flag 
 was true, and is updated with the new genstamp.
 Now the block report is processed on the NN side. This particular block was in 
 RBW (when the DN generated the BR), but the file was completed on the NN side.
 Finally the block is marked as corrupt because of the genstamp mismatch.
 {code}
 case RWR:
   if (!storedBlock.isComplete()) {
     return null; // not corrupt
   } else if (storedBlock.getGenerationStamp() != iblk.getGenerationStamp()) {
     return new BlockToMarkCorrupt(storedBlock,
         "reported " + reportedState + " replica with genstamp " +
         iblk.getGenerationStamp() + " does not match COMPLETE block's " +
         "genstamp in block map " + storedBlock.getGenerationStamp());
   } else { // COMPLETE block, same genstamp
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-25 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-6269:
-

Attachment: HDFS-6269-AuditLogWebOpen.txt

Updated the patch to add a new "proto=" field.

 NameNode Audit Log should differentiate between webHDFS open and HDFS open.
 ---

 Key: HDFS-6269
 URL: https://issues.apache.org/jira/browse/HDFS-6269
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, webhdfs
Affects Versions: 2.4.0
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: HDFS-6269-AuditLogWebOpen.txt, 
 HDFS-6269-AuditLogWebOpen.txt


 To enhance traceability, the NameNode audit log should use a different string 
 for "open" in the "cmd=" part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6254) hdfsConnect segment fault where namenode not connected

2014-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981175#comment-13981175
 ] 

Chris Nauroth commented on HDFS-6254:
-

[~huangkx], yes, you're correct.  Neither {{hdfsConnect}} nor 
{{hdfsBuilderConnect}} actually initiates a network connection.  Instead, they 
connect an {{hdfsFS}} struct to an underlying Java {{FileSystem}}.  
Basically, this stuff is all wrappers over {{FileSystem#get}} in the Java 
layer.  There is no actual interaction with the HDFS daemons until you use it 
with a function like {{hdfsCreateDirectory}}.  Our use of the word "connect" in 
the function names is perhaps slightly misleading.
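
In the Java layer the same behavior looks like this (an illustrative sketch; 
the NameNode URI and probe path are made up):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LazyConnectSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Succeeds even if the NameNode is down: no RPC happens here.
    FileSystem fs = FileSystem.get(URI.create("hdfs://nn-host:8020"), conf);
    // First real RPC; a dead NameNode surfaces as an error here instead.
    fs.mkdirs(new Path("/tmp/probe"));
  }
}
{code}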

 hdfsConnect segment fault where namenode not connected
 --

 Key: HDFS-6254
 URL: https://issues.apache.org/jira/browse/HDFS-6254
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.2.0
 Environment: Linux Centos 64bit
Reporter: huang ken
Assignee: Chris Nauroth

 When the namenode is not started, the libhdfs client causes a segmentation 
 fault while connecting.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6270) Secondary namenode status page shows transaction count in bytes

2014-04-25 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-6270:
---

Attachment: HDFS-6270.patch

Thanks for the pointer, [~wheat9]. 
Rebased the patch per latest changes.
Made similar changes to status.html also.

 Secondary namenode status page shows transaction count in bytes
 ---

 Key: HDFS-6270
 URL: https://issues.apache.org/jira/browse/HDFS-6270
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HDFS-6270.patch, HDFS-6270.patch, HDFS-6270.patch


 Though the checkpoint trigger was changed from edit log size to transaction 
 count, the SN UI still shows the limit in terms of bytes.
 It appears as:
 Checkpoint Period: 3600 seconds
 Checkpoint Size  : 976.56 KB (= 1000000 bytes)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6262) HDFS doesn't raise FileNotFoundException if the source of a rename() is missing

2014-04-25 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981258#comment-13981258
 ] 

Suresh Srinivas commented on HDFS-6262:
---

bq.  But: every other filesystem considers renaming a file that doesn't exist 
to be an error.
I agree. However, some of our methods have two ways to indicate failure - 
return false or throw an exception - and therein lies the problem.

Given that applications must handle exceptions thrown from these methods, 
changing the behavior for HDFS should be okay. But we do not know how all the 
apps use this API, and I suspect we will break some applications, especially 
case 2 that you pointed out in your comments. One thing I was thinking of was 
to possibly have a hidden configuration to revert to the old behavior in HDFS. 
But that is pretty ugly.

Again, I feel we should leave the method as is in HDFS. But I am okay if you 
want to go ahead and make this change. We should perhaps document it and hope 
that not too many applications break.
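
For reference, a minimal sketch of the two failure styles under discussion, 
assuming an HDFS default filesystem (the paths are made up):
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameSemantics {
  public static void main(String[] args) throws IOException {
    Path missing = new Path("/no/such/file"); // hypothetical nonexistent source
    Path dest = new Path("/tmp/dest");
    Configuration conf = new Configuration();

    // FileSystem#rename: a missing source currently yields false on HDFS.
    FileSystem fs = FileSystem.get(conf);
    System.out.println("rename returned " + fs.rename(missing, dest));

    // FileContext#rename: the same situation raises FileNotFoundException.
    FileContext fc = FileContext.getFileContext(conf);
    try {
      fc.rename(missing, dest);
    } catch (FileNotFoundException e) {
      System.out.println("FileContext rejected the rename: " + e.getMessage());
    }
  }
}
{code}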


 HDFS doesn't raise FileNotFoundException if the source of a rename() is 
 missing
 ---

 Key: HDFS-6262
 URL: https://issues.apache.org/jira/browse/HDFS-6262
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
 Attachments: HDFS-6262.2.patch, HDFS-6262.patch


 HDFS's {{rename(src, dest)}} returns false if src does not exist - all the 
 other filesystems raise {{FileNotFoundException}}.
 This behaviour is defined in {{FSDirectory.unprotectedRenameTo()}} - the 
 attempt is logged, but the operation then just returns false.
 I propose changing the behaviour of {{DistributedFileSystem}} to be the same 
 as that of the others - and of {{FileContext}}, which does reject renames 
 with nonexistent sources.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-04-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981279#comment-13981279
 ] 

stack commented on HDFS-6133:
-

[~daryn] Pinning would work as long as we are still able to dictate where to 
place the replicas.  If pinning a block makes it immune to balance, that should 
work.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133.patch


 Currently, running the Balancer will destroy the Regionserver's data locality.
 If getBlocks could exclude blocks belonging to files which have a specific 
 path prefix, like /hbase, then we could run the Balancer without destroying 
 the Regionserver's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Support XAttrs from NameNode and implements XAttr APIs for DistributedFileSystem

2014-04-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981290#comment-13981290
 ] 

Andrew Wang commented on HDFS-6258:
---

I think Chris meant we could do some code sharing, maybe with a base class for 
both flags.

 Support XAttrs from NameNode and implements XAttr APIs for 
 DistributedFileSystem
 

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.patch


 This JIRA is to implement extended attributes in HDFS: support XAttrs from 
 NameNode, implements XAttr APIs for DistributedFileSystem and so on.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2014-04-25 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981298#comment-13981298
 ] 

Jing Zhao commented on HDFS-4167:
-

Thanks for the comments, [~vinayrpet]! I will update the patch to address your 
comments. Will also add new unit tests and fix remaining bugs.

bq. I see that design document attached is in different approach than the patch 
attached. It can be confusing ..
[~rguo], since the design doc is not tightly related to this jira, and has 
already been posted in HDFS-6087, I removed it from this jira first. Please 
feel free to create separate jiras for the new design of snapshots.

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Support XAttrs from NameNode and implements XAttr APIs for DistributedFileSystem

2014-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981300#comment-13981300
 ] 

Chris Nauroth commented on HDFS-6258:
-

Yes, what Andrew said is what I meant too.  Right now, the code of 
{{XAttrConfigFlag}} looks identical to {{AclConfigFlag}}, except for the 
specific config property and error message strings.  One idea would be to 
consolidate this into a single {{ConfigFlag}} class that accepts the different 
properties and error message strings as constructor arguments.
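
Roughly, the consolidated class could look like this (a hypothetical sketch, 
not code from any attached patch; the property key and message in the usage 
note are just the ACL example):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

/** Hypothetical shared flag check; all names here are illustrative. */
class ConfigFlag {
  private final boolean enabled;
  private final String disabledError;

  ConfigFlag(Configuration conf, String propertyKey, boolean defaultValue,
      String disabledError) {
    this.enabled = conf.getBoolean(propertyKey, defaultValue);
    this.disabledError = disabledError;
  }

  /** Rejects an API call when the guarded feature is switched off. */
  void checkForApiCall() throws IOException {
    if (!enabled) {
      throw new IOException(disabledError);
    }
  }
}

// Usage: one instance per feature instead of two near-identical classes, e.g.
//   new ConfigFlag(conf, "dfs.namenode.acls.enabled", false,
//       "The ACL operation has been rejected: support for ACLs is disabled");
{code}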

 Support XAttrs from NameNode and implements XAttr APIs for 
 DistributedFileSystem
 

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.patch


 This JIRA is to implement extended attributes in HDFS: support XAttrs from 
 NameNode, implements XAttr APIs for DistributedFileSystem and so on.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4167) Add support for restoring/rolling back to a snapshot

2014-04-25 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4167:


Attachment: (was: HDFS Design Proposal.pdf)

 Add support for restoring/rolling back to a snapshot
 

 Key: HDFS-4167
 URL: https://issues.apache.org/jira/browse/HDFS-4167
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Suresh Srinivas
Assignee: Jing Zhao
 Attachments: HDFS-4167.000.patch


 This jira tracks work related to restoring a directory/file to a snapshot.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases

2014-04-25 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HDFS-6255:
---

Assignee: Chris Nauroth

 fuse_dfs will not adhere to ACL permissions in some cases
 -

 Key: HDFS-6255
 URL: https://issues.apache.org/jira/browse/HDFS-6255
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: Chris Nauroth

 As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. 
 Then I set a new acl group:jenkins:rwx on /tmp/acl_dir.
 {code}
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir
 # file: /tmp/acl_dir
 # owner: hdfs
 # group: supergroup
 user::rwx
 group::---
 group:jenkins:rwx
 mask::rwx
 other::---
 {code}
 Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create 
 a file and directory inside.
 {code}
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/
 Found 2 items
 drwxr-xr-x   - jenkins supergroup  0 2014-04-17 19:11 
 /tmp/acl_dir/testdir1
 -rw-r--r--   1 jenkins supergroup  0 2014-04-17 19:11 
 /tmp/acl_dir/testfile1
 [jenkins@hdfs-vanilla-1 ~]$ 
 {code}
 However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a 
 fuse_dfs mount, I get permission denied. Same permission denied when I try to 
 create or list files.
 {code}
 [jenkins@hdfs-vanilla-1 tmp]$ ls -l
 total 16
 drwxrwx--- 4 hdfs    nobody 4096 Apr 17 19:11 acl_dir
 drwx------ 2 hdfs    nobody 4096 Apr 17 18:30 acl_dir_2
 drwxr-xr-x 3 mapred  nobody 4096 Mar 11 03:53 mapred
 drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli
 -rwx------ 1 hdfs    nobody    0 Apr  7 17:18 tf1
 [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir
 bash: cd: acl_dir: Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2
 touch: cannot touch `acl_dir/testfile2': Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2
 mkdir: cannot create directory `acl_dir/testdir2': Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ 
 {code}
 The fuse_dfs debug output doesn't show any error for the above operations:
 {code}
 unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48
unique: 18, success, outsize: 32
 unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80
 readdir[0] from 0
unique: 19, success, outsize: 312
 unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56
 getattr /tmp
unique: 20, success, outsize: 120
 unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80
unique: 21, success, outsize: 16
 unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
unique: 22, success, outsize: 16
 unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56
 getattr /tmp
unique: 23, success, outsize: 120
 unique: 24, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 24, success, outsize: 120
 unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 25, success, outsize: 120
 unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 26, success, outsize: 120
 unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 27, success, outsize: 120
 unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 28, success, outsize: 120
 {code}
 In other scenarios, ACL permissions are enforced successfully. For example, 
 as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set 
 the acl user:jenkins:--- on the directory. On the fuse mount, I am not able 
 to ls, mkdir, or touch to that directory as jenkins user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Support XAttrs from NameNode and implements XAttr APIs for DistributedFileSystem

2014-04-25 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981301#comment-13981301
 ] 

Uma Maheswara Rao G commented on HDFS-6258:
---

I think the patch is straightforward to review as a whole, but I am not 
against splitting it into small JIRAs if needed. :-)
Much of the code is PB related as well, so separating the protocol pieces from 
the core logic would be a good thing to do for more focus on the core logic.
But I don't see a strong reason for splitting persistence and implementation 
here, as we have just 3 APIs to support and much of the code is already 
simplified since we just use an INodeFeature.

BTW, here are my early review comments on the attached patch. 
FSNamesystem and ClientProtocol:
- Can we keep one getXAttr at the ClientProtocol level instead of having more 
overloaded APIs?
  When we validate the xattrs parameter for null and empty, we can return from 
the client side itself; there is no need to pass such calls to the server for 
validation.
  This helps remove one overloaded API, because if the xattrs parameter is 
null we treat the API like getXAttrs(path), as sketched below.
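
Something like this on the client side (a hypothetical sketch of the 
suggestion; filterXAttrs is an illustrative placeholder, not a real helper):
{code}
// Hypothetical sketch: validate in DFSClient and short-circuit, so the
// server needs only one getXAttrs RPC shape.
public Map<String, byte[]> getXAttrs(String src, List<String> names)
    throws IOException {
  if (names == null || names.isEmpty()) {
    return getXAttrs(src); // same as the unfiltered call; no extra overload needed
  }
  return filterXAttrs(getXAttrs(src), names); // placeholder for the filtered path
}
{code}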

XAttr.java :
- {noformat}
/**
 * XAttr is POSIX Extended Attribute model, similar to the one in traditional 
Operating Systems.
 * Extended Attribute consists of a name and associated data, and 4 namespaces 
are defined: user, 
 * trusted, security and system.
 *   1). USER namespace extended attribute may be assigned for storing 
arbitrary additional 
 *   information, and its access permissions are defined by file/directory 
permission bits.
 *   2). TRUSTED namespace extended attribute are visible and accessible only 
to privilege user 
 *   (file/directory owner or fs admin), and it is available from both user 
space (filesystem 
 *   API) and fs kernel.
 *   3). SYSTEM namespace extended attribute is used by fs kernel to store 
system objects, 
 *   and only available in fs kernel. It's not visible to users.
 *   4). SECURITY namespace extended attribute is used by fs kernel for 
security features, and 
 *   it's not visible to users.
 * <p/>
 * @see <a 
href="http://en.wikipedia.org/wiki/Extended_file_attributes">http://en.wikipedia.org/wiki/Extended_file_attributes</a>
 *
 */
 {noformat}
 Please use appropriate line breaks in the java doc. 

 And I don't think "." is necessary after each point number.

 - {code}
 if (name == null) {
+  if (other.name != null) {
+return false;
+  }
+} else if (!name.equals(other.name)) {
+  return false;
+}
{code}
Commons StringUtils does a null-safe comparison, so can we use that instead?
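A sketch of the suggestion, assuming Commons Lang (already a Hadoop 
dependency):
{code}
// StringUtils.equals() is null-safe: two nulls compare equal and no NPE
// is possible, replacing the hand-rolled null checks above.
if (!org.apache.commons.lang.StringUtils.equals(name, other.name)) {
  return false;
}
{code}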


DFSClient.java:
{code}
if (prefixIndex == -1) {
+  throw new IllegalArgumentException("XAttr name must be prefixed with user/trusted/security/system which followed by '.'");
+}
{code}
Good to use HadoopIllegalArgumentException

{code}
else {
+  throw new IllegalArgumentException("XAttr name must be prefixed with user/trusted/security/system which followed by '.'");
+}
{code}
Same as above.

- Seems like we allow empty names, e.g. "user."?

- {code}
 xAttrMap.put(name, xAttr.getValue());
 {code}
 Seems like we are allowing null xattr values? I did not see any validation 
against them; if so, we may hit a null pointer here while getting, I think.

 XAttrStorage.java:

 - Why can't we validate this much earlier? Why do we need to carry this 
validation parameter all the way to the storage layer?
 - I would like to see the XAttr storage format in the javadoc of the 
XAttrStorage class.



General:
- I don't see any failure audit logs.
- I think we may need to add the layout version in NameNodeLayoutVersion, as 
the split happened as part of RollingUpgrades, I guess. I also see the ACL 
version number tracked there in NameNodeLayoutVersion. Please check this.


XAttrConfigFlag.java:
I don't have a strong feeling about having a separate class for a config 
parameter and a small check.

 Tests:

 - Can the tests in TestXAttr and TestXAttrs be merged into one class?
 - I don't see any javadoc for the tests.
 
Some key cases to cover:
 - I don't see NN restart cases. Check after a restart whether the NN is able 
to get the xattrs which were set before the restart.
 - Please include some test cases for HA failover and getting xattrs from the 
other node.
 - See if they are lost after a checkpoint etc. in HA.
 And I did not see a test validating that the no-flags API behaves the same as 
the multi-flagged API here. But no force on this, as the API is just a 
delegation with multiple flags.

 Yes, I agree with others that we should have a separate JIRA covering XML 
based test cases for command line usage. Much of the CLI cases can be covered 
there in tests.
 I remember Liu also had a thought on filing a JIRA for it once the core part 
is in.
 If you want, you can take out Hdfs.java into a separate jira and add tests 
for supporting this via the FileContext APIs.


 Support XAttrs from NameNode and implements XAttr APIs for 
 

[jira] [Commented] (HDFS-6258) Support XAttrs from NameNode and implements XAttr APIs for DistributedFileSystem

2014-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981309#comment-13981309
 ] 

Chris Nauroth commented on HDFS-6258:
-

I have one more thing to add to my list of requested test cases.  Please update 
{{TestSafeMode}} to verify that attempts to set xattrs during safe mode get 
rejected.  This is important for confirming that {{FSNamesystem}} has the 
proper calls to {{checkNameNodeSafeMode}}.
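
Something along these lines (a hypothetical fragment for {{TestSafeMode}}; it 
assumes the setXAttr API from the patch under review and the test's existing 
{{fs}} fixture):
{code}
// Hypothetical sketch: setXAttr must be rejected while the NN is in safe mode.
fs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
try {
  fs.setXAttr(new Path("/file1"), "user.a1", new byte[]{0x31});
  fail("setXAttr should be rejected while in safe mode");
} catch (IOException ioe) {
  GenericTestUtils.assertExceptionContains("safe mode", ioe);
} finally {
  fs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
}
{code}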

 Support XAttrs from NameNode and implements XAttr APIs for 
 DistributedFileSystem
 

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.patch


 This JIRA is to implement extended attributes in HDFS: support XAttrs from 
 NameNode, implements XAttr APIs for DistributedFileSystem and so on.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981335#comment-13981335
 ] 

Hadoop QA commented on HDFS-6269:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12641931/HDFS-6269-AuditLogWebOpen.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestFsck

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6734//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6734//console

This message is automatically generated.

 NameNode Audit Log should differentiate between webHDFS open and HDFS open.
 ---

 Key: HDFS-6269
 URL: https://issues.apache.org/jira/browse/HDFS-6269
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, webhdfs
Affects Versions: 2.4.0
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: HDFS-6269-AuditLogWebOpen.txt, 
 HDFS-6269-AuditLogWebOpen.txt


 To enhance traceability, the NameNode audit log should use a different string 
 for "open" in the "cmd=" part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Support XAttrs from NameNode and implements XAttr APIs for DistributedFileSystem

2014-04-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981323#comment-13981323
 ] 

Chris Nauroth commented on HDFS-6258:
-

On the topic of splitting the patch, I don't feel strongly that it needs to be 
done exactly the way I suggested.  I would prefer to see some kind of split 
though.  I always think I can give a more focused review when a patch is no 
larger than about 50k, just as a rough guideline.

Note also that a lot of the feedback so far relates to adding more tests, so 
overall this is likely to grow larger and larger as the feedback gets 
incorporated.

 Support XAttrs from NameNode and implements XAttr APIs for 
 DistributedFileSystem
 

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.patch


 This JIRA is to implement extended attributes in HDFS: support XAttrs from 
 NameNode, implements XAttr APIs for DistributedFileSystem and so on.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-04-25 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6287:
--

 Summary: Add vecsum test of libhdfs read access times
 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Add vecsum, a benchmark that tests libhdfs access times.  This includes 
short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6252) Namenode old webUI should be deprecated

2014-04-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6252:
-

Attachment: HDFS-6252.004.patch

 Namenode old webUI should be deprecated
 ---

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch, HDFS-6252.004.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we download a file via 
 "download this file" on browseDirectory.jsp, it will throw an error:
 Problem accessing /streamFile/***
 because the streamFile servlet was deleted in HDFS-5570.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981418#comment-13981418
 ] 

Haohui Mai commented on HDFS-5865:
--

+1

 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the followings:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Delimited, Indented, and Ls processor were removed.
 * A new Web processor, which supersedes the Ls processor, was added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5865:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~ajisakaa] for the 
contribution.

 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 2.5.0

 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the followings:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Delimited, Indented, and Ls processor were removed.
 * A new Web processor, which supersedes the Ls processor, was added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5865) Update OfflineImageViewer document

2014-04-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981454#comment-13981454
 ] 

Hudson commented on HDFS-5865:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5574 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5574/])
HDFS-5865. Update OfflineImageViewer document. Contributed by Akira Ajisaka. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1590100)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/HdfsImageViewer.apt.vm


 Update OfflineImageViewer document
 --

 Key: HDFS-5865
 URL: https://issues.apache.org/jira/browse/HDFS-5865
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Fix For: 2.5.0

 Attachments: HDFS-5865.2.patch, HDFS-5865.patch


 OfflineImageViewer is renewed to handle the new format of fsimage by 
 HDFS-5698 (fsimage in protobuf).
 We should document the followings:
 * The tool can handle the layout version of Hadoop 2.4 and up. (If you want 
 to handle the older version, you can use OfflineImageViewer of Hadoop 2.3)
 * Delimited, Indented, and Ls processor were removed.
 * A new Web processor, which supersedes the Ls processor, was added.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-04-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6287:
---

Status: Patch Available  (was: Open)

 Add vecsum test of libhdfs read access times
 

 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch


 Add vecsum, a benchmark that tests libhdfs access times.  This includes 
 short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
 local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-04-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6287:
---

Attachment: HDFS-6282.001.patch

 Add vecsum test of libhdfs read access times
 

 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch


 Add vecsum, a benchmark that tests libhdfs access times.  This includes 
 short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
 local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6270) Secondary namenode status page shows transaction count in bytes

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981476#comment-13981476
 ] 

Hadoop QA commented on HDFS-6270:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641946/HDFS-6270.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6735//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6735//console

This message is automatically generated.

 Secondary namenode status page shows transaction count in bytes
 ---

 Key: HDFS-6270
 URL: https://issues.apache.org/jira/browse/HDFS-6270
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HDFS-6270.patch, HDFS-6270.patch, HDFS-6270.patch


 Though the checkpoint trigger was changed from edit log size to transaction 
 count, the SN UI still shows the limit in terms of bytes.
 It appears as:
 Checkpoint Period: 3600 seconds
 Checkpoint Size  : 976.56 KB (= 1000000 bytes)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981477#comment-13981477
 ] 

Hadoop QA commented on HDFS-6287:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641990/HDFS-6282.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6737//console

This message is automatically generated.

 Add vecsum test of libhdfs read access times
 

 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch


 Add vecsum, a benchmark that tests libhdfs access times.  This includes 
 short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
 local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6165) hdfs dfs -rm -r and hdfs -rmdir commands can't remove empty directory

2014-04-25 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981505#comment-13981505
 ] 

Yongjun Zhang commented on HDFS-6165:
-

Hi guys, 

Thanks a lot for the review/comments, and sorry again for getting back late. 

Instead of posting another patch to address all the comments, I'd like to 
summarize the solution here first. Sorry for the long posting again. Would you 
please help review the proposed solution below and see if you agree? I will 
address the other comments when putting together a new revision of the patch. 
Many thanks.

There are two commands that have the problem here:

1. hdfs dfs -rmdir (referred to as rmdir in the discussion below)
2. hdfs dfs -rm -r (referred to as rmr in the discussion below)

Both commands eventually call the FSNamesystem#deleteInternal method
{code}
  private boolean deleteInternal(String src, boolean recursive,
  boolean enforcePermission, boolean logRetryCache)
  throws AccessControlException, SafeModeException, UnresolvedLinkException,
 IOException {
{code}

The deleteInternal method throws an exception if recursive is not true and the 
src to be deleted is not empty; otherwise, it checks the necessary 
permissions, collects all blocks/inodes to be deleted, and deletes them 
recursively. The deletion process excludes snapshottable dirs that have at 
least one snapshot.

Right now it requires FULL permission on the subdirs or files under the target 
dir to be deleted. This permission check is also recursive: it requires that 
every child has FULL permission. This is the place we try to fix for the 
different scenarios.

rmr calls it with the recursive parameter passed as true, and rmdir calls it 
with the recursive parameter set to false.

Solution summary:

1. rmdir

The recursion issue in the comments you guys made is only relevant to rmr. So 
the solution for rmdir is simple:

- for a nonempty directory, deleteInternal simply throws a nonempty-dir 
exception, and the FsShell side catches the exception
- for an empty directory, only check the parent/prefix permission and ignore 
the target dir's own permission (POSIX compliant); delete if the permission is 
satisfied, throw an exception otherwise.

2. rmr

The last patch (version 004) I posted only checks whether the target dir to be 
deleted has READ permission (the earlier versions ignored the target dir's 
permission when it was empty), and I didn't change the behaviour of checking 
subdirs for a non-empty target dir. For a non-empty target dir, the current 
implementation requires FULL permission on a subdir in order to delete it, 
even if the subdir is empty. This is not quite right, as [~andrew.wang] 
pointed out.

I'd like to try to implement what Andrew suggested with an additional 
parameter alongside subAccess in FSPermissionChecker#checkSubAccess, e.g. 
emptyDirSubAccess:
{code}
void checkPermission(String path, INodeDirectory root, boolean doCheckOwner,
  FsAction ancestorAccess, FsAction parentAccess, FsAction access,
  FsAction subAccess, FsAction emptyDirSubAccess, boolean resolveLink)
{code}
The subAccess parameter will be passed FsAction.ALL (as currently) and 
emptyDirSubAccess will be passed FsAction.NONE.

If a subdir is not empty, it is checked against subAccess; if it is empty, it 
is checked against emptyDirSubAccess, as sketched below.
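
In FSPermissionChecker#checkSubAccess the recursion would then branch roughly 
like this (a hypothetical sketch; isEmptyDirectory() and check() are 
illustrative placeholders, not the real FSPermissionChecker code):
{code}
// Hypothetical sketch: an empty subdirectory is held to the weaker requirement.
FsAction required = isEmptyDirectory(dir) ? emptyDirSubAccess : subAccess;
if (required != FsAction.NONE) {
  check(dir, required); // existing per-inode permission check
}
{code}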

About using FsAction.ALL for the subAccess parameter: it is a bit overly 
stringent for intermediate paths. Say we want to delete targetDir/a/b/c; we 
don't have to have WRITE permission on targetDir/a, but we do need WRITE 
permission on targetDir/a/b. We might address this issue in a separate JIRA if 
you agree.

Hi [~daryn], the following is actually not true in my view:
{quote}
I bet for the described problem, you could delete the no-perms dir if you 
created a new directory, moved the no-perms dir into it, and then recursively 
deleted the new directory.
{quote}
That's because the current implementation recursively requires FULL permission 
on subdirs/files in order to delete them.
If we suddenly change the implementation to allow deleting a non-empty dir 
without checking the subdir/file permissions, I'm worried about bad user 
impact. 

Thanks again for your time.


 hdfs dfs -rm -r and hdfs -rmdir commands can't remove empty directory 
 --

 Key: HDFS-6165
 URL: https://issues.apache.org/jira/browse/HDFS-6165
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
Priority: Minor
 Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
 HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch


 Given a directory owned by user A with WRITE permission containing an empty 
 directory owned by user B, it is not possible to delete user B's empty 
 directory 

[jira] [Updated] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-04-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6287:
---

Attachment: HDFS-6287.002.patch

Looks like the SSE intrinsics could not be found.  I'm going to try again with 
#include <emmintrin.h>.  If this doesn't work, I guess we'll have to make it 
auto-detect whether it can use SSE, or provide a compile option.

 Add vecsum test of libhdfs read access times
 

 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch, HDFS-6287.002.patch


 Add vecsum, a benchmark that tests libhdfs access times.  This includes 
 short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
 local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6110) adding more slow action log in critical write path

2014-04-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-6110:


Attachment: HDFS-6110v6.txt

[~xieliang007] 's latest patch adding in offline review feedback I got from our 
Todd (See below): i.e. having one threshold for dfsclient (a higher one so 
folks MR'ing don't get annoyed by all the WARNings about slow i/o), and then 
another for datanode side which is much lower so we can see bad i/os.

{code}
16:38  todd stack: just looked at 6110. had one more thought after commenting 
on the JIRA
16:38  todd you think we should add a separate config for client vs server?
16:38  todd I'm afraid that the 300ms default may be a little aggressive for 
the client - people using hadoop fs -put to upload files may get kind of 
nervous the next time they upgrade if they start
  seeing warnings
16:38  todd MR jobs too
16:39  todd may be better to have the client default be 10sec or something 
really long, and then HBase could tune it down for WAL files
16:39  stack todd: thanks boss
16:39  todd you think i'm crazy?
16:39  stack no
16:39  stack Testing it, it is illuminating to see how long stuff takes
16:39  todd k. yea
16:39  todd I had a patch like that once on the server side
16:39  stack Was worried though that it'd freak folks out.
16:40  stack Or, rather, they'd ignore what is being said and just consider 
it 'noise'.
16:40  todd yea
16:40  todd for a throughput app it is kind of noise
16:40  todd but hbase could definitely tune the default inside the RS down
16:40  stack Let me do as you suggest.
16:40  todd k
16:40  stack Thanks for review.
16:40  todd feel free to paste this convo into the jira so it makes sense :)
16:40  todd didn't want to post yet another comment and pollute everyone's 
mailboxes
16:41  * stack nod
{code}

 adding more slow action log in critical write path
 --

 Key: HDFS-6110
 URL: https://issues.apache.org/jira/browse/HDFS-6110
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.0.0, 2.3.0
Reporter: Liang Xie
Assignee: Liang Xie
 Attachments: HDFS-6110-v2.txt, HDFS-6110.txt, HDFS-6110v3.txt, 
 HDFS-6110v4.txt, HDFS-6110v5.txt, HDFS-6110v6.txt


 After digging into an HBase write spike issue caused by slow buffer I/O in 
 our cluster, we realized we'd better add more abnormal-latency warning logs 
 in the write flow, so that if others hit an HLog sync spike, they can get 
 more detailed info from the HDFS side at the same time.
 Patch will be uploaded soon.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981546#comment-13981546
 ] 

Hadoop QA commented on HDFS-6287:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642003/HDFS-6287.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6738//console

This message is automatically generated.

 Add vecsum test of libhdfs read access times
 

 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch, HDFS-6287.002.patch


 Add vecsum, a benchmark that tests libhdfs access times.  This includes 
 short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
 local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6269) NameNode Audit Log should differentiate between webHDFS open and HDFS open.

2014-04-25 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated HDFS-6269:
-

Attachment: HDFS-6269-AuditLogWebOpen.txt

Sorry about that. I updated the patch to fix the unit test error.

 NameNode Audit Log should differentiate between webHDFS open and HDFS open.
 ---

 Key: HDFS-6269
 URL: https://issues.apache.org/jira/browse/HDFS-6269
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, webhdfs
Affects Versions: 2.4.0
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: HDFS-6269-AuditLogWebOpen.txt, 
 HDFS-6269-AuditLogWebOpen.txt, HDFS-6269-AuditLogWebOpen.txt


 To enhance traceability, the NameNode audit log should use a different string 
 for "open" in the "cmd=" part of the audit entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5851) Support memory as a storage medium

2014-04-25 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HDFS-5851:
---

Attachment: SupportingMemoryStorageinHDFSPersistentandDiscardableMemory.pdf

Added a comparison to Tachyon in the doc. There is also an implementation 
difference that I don't cover (Tachyon, I believe, uses RamFs rather than 
memory that is mapped to an HDFS file -- but I need to verify that).

 Support memory as a storage medium
 --

 Key: HDFS-5851
 URL: https://issues.apache.org/jira/browse/HDFS-5851
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: 
 SupportingMemoryStorageinHDFSPersistentandDiscardableMemory.pdf, 
 SupportingMemoryStorageinHDFSPersistentandDiscardableMemory.pdf


 Memory can be used as a storage medium for smaller/transient files for fast 
 write throughput.
 More information/design will be added later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6288) DFSInputStream Pread doesn't update ReadStatistics

2014-04-25 Thread Juan Yu (JIRA)
Juan Yu created HDFS-6288:
-

 Summary: DFSInputStream Pread doesn't update ReadStatistics
 Key: HDFS-6288
 URL: https://issues.apache.org/jira/browse/HDFS-6288
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor


DFSInputStream Pread doesn't update ReadStatistics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-5851) Support memory as a storage medium

2014-04-25 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981608#comment-13981608
 ] 

Sanjay Radia edited comment on HDFS-5851 at 4/25/14 9:10 PM:
-

Added a comparison to Tachyon in the doc. There is also an implementation 
difference that I don't cover (Tachyon, I believe, uses RamFs rather than 
memory that is mapped to an HDFS file -- but I need to verify that).

I have reproduced the text from the updated doc here for convenience:
Recently, Spark has added an RDD implementation called Tachyon [4]. Tachyon is 
outside the address space of an application and allows sharing RDDs across 
applications. Both Tachyon and DDMs use memory mapped files and lazy writing to 
reduce the need to recompute. Tachyon, since it is an RDD implementation, 
records the computation in order to regenerate the data in case of loss, 
whereas DDMs rely on the application to regenerate. Tachyon and RDDs do not have a 
notion of discardability, which is fundamental to DDMs where data can be 
discarded when it is under memory and/or backing store pressure.



was (Author: sanjay.radia):
Added a comparison to Tachyon in the doc. There is also an implementation 
difference that I don't cover (Tachyon, I believe, uses RamFs rather than 
memory that is mapped to an HDFS file -- but I need to verify that).

 Support memory as a storage medium
 --

 Key: HDFS-5851
 URL: https://issues.apache.org/jira/browse/HDFS-5851
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Attachments: 
 SupportingMemoryStorageinHDFSPersistentandDiscardableMemory.pdf, 
 SupportingMemoryStorageinHDFSPersistentandDiscardableMemory.pdf


 Memory can be used as a storage medium for smaller/transient files for fast 
 write throughput.
 More information/design will be added later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6288) DFSInputStream Pread doesn't update ReadStatistics

2014-04-25 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HDFS-6288:
--

Status: Patch Available  (was: Open)

 DFSInputStream Pread doesn't update ReadStatistics
 --

 Key: HDFS-6288
 URL: https://issues.apache.org/jira/browse/HDFS-6288
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HDFS-6288.1.patch


 DFSInputStream Pread doesn't update ReadStatistics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6288) DFSInputStream Pread doesn't update ReadStatistics

2014-04-25 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HDFS-6288:
--

Attachment: HDFS-6288.1.patch

Here is the patch to update ReadStatistics for pread; it also includes a unit 
test for it.
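
The gist is accounting pread bytes the way the sequential read path already 
does (a hypothetical sketch; method and field names are illustrative, not the 
actual DFSInputStream internals):
{code}
// Hypothetical sketch of the pread path:
int nread = pread(position, buffer, offset, length); // existing positional read
if (nread > 0) {
  synchronized (this) {
    readStatistics.addBytesRead(nread); // previously skipped for pread
  }
}
{code}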

 DFSInputStream Pread doesn't update ReadStatistics
 --

 Key: HDFS-6288
 URL: https://issues.apache.org/jira/browse/HDFS-6288
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HDFS-6288.1.patch


 DFSInputStream Pread doesn't update ReadStatistics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6288) DFSInputStream Pread doesn't update ReadStatistics

2014-04-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981622#comment-13981622
 ] 

Andrew Wang commented on HDFS-6288:
---

Thanks Juan for the patch, looks good to me. +1 pending Jenkins.

 DFSInputStream Pread doesn't update ReadStatistics
 --

 Key: HDFS-6288
 URL: https://issues.apache.org/jira/browse/HDFS-6288
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HDFS-6288.1.patch


 DFSInputStream Pread doesn't update ReadStatistics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6252) Namenode old webUI should be deprecated

2014-04-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13981636#comment-13981636
 ] 

Hadoop QA commented on HDFS-6252:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12641979/HDFS-6252.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 47 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 41 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6736//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6736//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6736//console

This message is automatically generated.

 Namenode old webUI should be deprecated
 ---

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch, HDFS-6252.004.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we download a file via 
 "download this file" on browseDirectory.jsp, it will throw an error:
 Problem accessing /streamFile/***
 because the streamFile servlet was deleted in HDFS-5570.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-04-25 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HDFS-6289:


 Summary: HA failover can fail if there are pending DN messages for 
DNs which no longer exist
 Key: HDFS-6289
 URL: https://issues.apache.org/jira/browse/HDFS-6289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Critical


In an HA setup, the standby NN may receive messages from DNs for blocks which 
the standby NN is not yet aware of. It queues up these messages and replays 
them when it next reads from the edit log or fails over. On a failover, all of 
these pending DN messages must be processed successfully in order for the 
failover to succeed. If one of these pending DN messages refers to a DN 
storageId that no longer exists (because the DN with that transfer address has 
been reformatted and has re-registered with the same transfer address) then on 
transition to active the NN will not be able to process this DN message and 
will suicide with an error like the following:

{noformat}
2014-04-25 14:23:17,922 FATAL namenode.NameNode 
(NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
shutdown. Shutting down immediately.
java.io.IOException: Cannot mark blk_1073741825_900(stored=blk_1073741825_1001) 
as corrupt because datanode 127.0.0.1:33324 does not exist
{noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-04-25 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-6289:
-

Status: Patch Available  (was: Open)

 HA failover can fail if there are pending DN messages for DNs which no longer 
 exist
 ---

 Key: HDFS-6289
 URL: https://issues.apache.org/jira/browse/HDFS-6289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Critical
 Attachments: HDFS-6289.patch


 In an HA setup, the standby NN may receive messages from DNs for blocks which 
 the standby NN is not yet aware of. It queues up these messages and replays 
 them when it next reads from the edit log or fails over. On a failover, all 
 of these pending DN messages must be processed successfully in order for the 
 failover to succeed. If one of these pending DN messages refers to a DN 
 storageId that no longer exists (because the DN with that transfer address 
 has been reformatted and has re-registered with the same transfer address) 
 then on transition to active the NN will not be able to process this DN 
 message and will suicide with an error like the following:
 {noformat}
 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
 (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
 shutdown. Shutting down immediately.
 java.io.IOException: Cannot mark 
 blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
 127.0.0.1:33324 does not exist
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-04-25 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-6289:
-

Attachment: HDFS-6289.patch

Patch attached which addresses this issue by clearing the pending DN message 
queue of any messages that mention a DN being removed from the DN map.
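
Conceptually (a hypothetical sketch of the cleanup; the method names are 
illustrative):
{code}
// When a DN leaves the datanode map, purge any queued messages that still
// reference it so a later failover cannot fail while replaying them.
void removeDatanode(DatanodeDescriptor node) {
  // ... existing removal of the node from the datanode map ...
  pendingDNMessages.removeAllMessagesForDatanode(node); // drop stale entries
}
{code}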

 HA failover can fail if there are pending DN messages for DNs which no longer 
 exist
 ---

 Key: HDFS-6289
 URL: https://issues.apache.org/jira/browse/HDFS-6289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Critical
 Attachments: HDFS-6289.patch


 In an HA setup, the standby NN may receive messages from DNs for blocks which 
 the standby NN is not yet aware of. It queues up these messages and replays 
 them when it next reads from the edit log or fails over. On a failover, all 
 of these pending DN messages must be processed successfully in order for the 
 failover to succeed. If one of these pending DN messages refers to a DN 
 storageId that no longer exists (because the DN with that transfer address 
 has been reformatted and has re-registered with the same transfer address) 
 then on transition to active the NN will not be able to process this DN 
 message and will suicide with an error like the following:
 {noformat}
 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
 (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
 shutdown. Shutting down immediately.
 java.io.IOException: Cannot mark 
 blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
 127.0.0.1:33324 does not exist
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-25 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HDFS-6203:


Assignee: Rushabh S Shah  (was: Kihwal Lee)

 check other namenode's state before HAadmin transitionToActive
 --

 Key: HDFS-6203
 URL: https://issues.apache.org/jira/browse/HDFS-6203
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha
Affects Versions: 2.3.0
Reporter: patrick white
Assignee: Rushabh S Shah

 Current behavior is that the haadmin -transitionToActive command will 
 complete the transition to Active even if the other namenode is already in 
 the Active state. This is not an allowed condition and should be handled by 
 fencing; however, setting both namenodes active can happen accidentally with 
 relative ease, especially in a production environment when performing manual 
 maintenance operations. 
 If this situation does occur it is very serious and will likely cause data 
 loss or, in the best case, require a difficult recovery to avoid it.
 This is a request to enhance haadmin's -transitionToActive command so that 
 haadmin checks the Active state of the other namenode before completing the 
 transition. If the other namenode is Active, the request should fail because 
 the other NN is already active.
 Not sure if there is a scenario where both namenodes being Active is valid 
 or desired, but to maintain functional compatibility a 'force' parameter 
 could be added to override this check and allow the previous behavior.
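 For illustration, a minimal sketch of the proposed pre-check, shaped as a 
 client-side admin command. The names here (TransitionToActiveSketch, 
 HAProtocol, the force flag) are hypothetical stand-ins, not the actual 
 HAAdmin code or CLI surface.
 {code:java}
 // Hedged sketch of the proposed haadmin pre-check. Class, interface, and
 // flag names are illustrative assumptions, not the shipped implementation.
 import java.io.IOException;
 
 class TransitionToActiveSketch {
   enum State { ACTIVE, STANDBY }
 
   // Minimal stand-in for the HA service RPC interface.
   interface HAProtocol {
     State getServiceState() throws IOException;
     void transitionToActive() throws IOException;
   }
 
   // Returns a shell-style exit code: 0 on success, 1 on refusal.
   static int run(HAProtocol target, HAProtocol other, boolean force)
       throws IOException {
     if (!force) {
       try {
         if (other.getServiceState() == State.ACTIVE) {
           System.err.println("Refusing transitionToActive: the other "
               + "namenode is already active. Pass the force flag to override.");
           return 1;
         }
       } catch (IOException e) {
         // Peer unreachable; cannot confirm it is active. Proceeding matches
         // the previous behavior, and fencing still applies elsewhere.
       }
     }
     target.transitionToActive();
     return 0;
   }
 }
 {code}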



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-2949) HA: Add check to active state transition to prevent operator-induced split brain

2014-04-25 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah reassigned HDFS-2949:


Assignee: Rushabh S Shah  (was: Kihwal Lee)

 HA: Add check to active state transition to prevent operator-induced split 
 brain
 

 Key: HDFS-2949
 URL: https://issues.apache.org/jira/browse/HDFS-2949
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ha, namenode
Affects Versions: 0.24.0
Reporter: Todd Lipcon
Assignee: Rushabh S Shah

 Currently, if the administrator mistakenly calls -transitionToActive on one 
 NN while the other one is still active, all hell will break loose. We can add 
 a simple check by having the NN make a getServiceState() RPC to its peer with 
 a short (~1 second?) timeout. If the RPC succeeds and indicates the other 
 node is active, it should refuse to enter active mode. If the RPC fails or 
 indicates standby, it can proceed.
 This is just meant as a preventive safety check - we still expect users to 
 use the -failover command, which has other checks plus fencing built in.
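 A minimal sketch of the check described above, assuming a hypothetical Peer 
 interface in place of the real NN-to-NN RPC proxy; the one-second timeout is 
 the value suggested in the description.
 {code:java}
 // Illustrative sketch of the NN-side safety check; names and the timeout
 // plumbing are assumptions, not the actual NameNode code.
 import java.io.IOException;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
 
 class PeerStateCheckSketch {
   enum State { ACTIVE, STANDBY }
 
   // Hypothetical stand-in for the peer NN's HA service RPC.
   interface Peer {
     State getServiceState() throws IOException;
   }
 
   // True if it is safe to proceed with the transition to active.
   static boolean safeToBecomeActive(Peer peer, ExecutorService pool) {
     Future<State> f = pool.submit(peer::getServiceState);
     try {
       // Short (~1 second) timeout so a hung peer cannot block the transition.
       return f.get(1, TimeUnit.SECONDS) != State.ACTIVE;
     } catch (TimeoutException | ExecutionException e) {
       f.cancel(true);
       // RPC failed or timed out: per the proposal, proceed with the transition.
       return true;
     } catch (InterruptedException e) {
       f.cancel(true);
       Thread.currentThread().interrupt();
       return true;
     }
   }
 }
 {code}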



--
This message was sent by Atlassian JIRA
(v6.2#6252)

