[jira] [Updated] (HDFS-2547) Design doc is wrong about default block placement policy.

2011-11-10 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2547:
--

Resolution: Invalid
Status: Resolved  (was: Patch Available)

The ReplicationTargetChooser comments are incorrect, and lead to this 
confusion. Resolving as invalid. The documented behavior is correct for all of 
the common cases.

 Design doc is wrong about default block placement policy.
 -

 Key: HDFS-2547
 URL: https://issues.apache.org/jira/browse/HDFS-2547
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.20.1
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 0.24.0

 Attachments: HDFS-2547.patch


 bq. For the common case, when the replication factor is three, HDFS's 
 placement policy is to put one replica on one node in the local rack, another 
 on a node in a different (remote) rack, and the last on a different node in 
 the same *remote* rack.
 Should actually be: and the last on a different node in the same *local* 
 rack.
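
For reference, the behavior the resolution confirms (third replica on a different node of the *same remote* rack as the second) as a toy sketch; the names are invented and this is not the actual ReplicationTargetChooser code:

{code:java}
// Toy sketch of the documented 3-replica placement order; parameter names
// are illustrative only.
public class PlacementSketch {
  static String[] chooseThreeReplicas(String writerNode,
                                      String remoteRackNodeA,
                                      String remoteRackNodeB) {
    String first = writerNode;        // on the writer's node (local rack)
    String second = remoteRackNodeA;  // one node on a different (remote) rack
    String third = remoteRackNodeB;   // a different node on the *same remote*
                                      // rack as the second replica
    return new String[] { first, second, third };
  }
}
{code}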

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2115) Transparent compression in HDFS

2011-11-10 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147825#comment-13147825
 ] 

Suresh Srinivas commented on HDFS-2115:
---

Todd, given how this functionality shapes up, it could require a lot of changes 
to HDFS. Please post a design document when the mechanism is in reasonable shape.

 Transparent compression in HDFS
 ---

 Key: HDFS-2115
 URL: https://issues.apache.org/jira/browse/HDFS-2115
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, hdfs client
Reporter: Todd Lipcon

 In practice, we find that a lot of users store text data in HDFS without 
 using any compression codec. Improving usability of compressible formats like 
 Avro/RCFile helps with this, but we could also help many users by providing 
 an option to transparently compress data as it is stored.
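
For context, this is roughly what users have to write by hand today; a minimal client-side sketch, assuming a GzipCodec and an arbitrary output path:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

import java.io.OutputStream;

public class ManualCompressWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
    // Today the caller has to remember to wrap the stream; transparent
    // compression would push this below the FileSystem API.
    OutputStream out =
        codec.createOutputStream(fs.create(new Path("/data/part-0.gz")));
    out.write("some text data".getBytes("UTF-8"));
    out.close();
  }
}
{code}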





[jira] [Commented] (HDFS-2542) Transparent compression storage in HDFS

2011-11-10 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147833#comment-13147833
 ] 

Suresh Srinivas commented on HDFS-2542:
---

HDFS-2115 had a much smaller scope than the problem being solved here.

While the description of the jira starts off the discussion, there are a lot of 
details to be covered. Some of the questions I am left with are:
# Post compression, the block files have a completely different length. The 
length tracked at the NN for the blocks is no longer valid.
# What is the state of the file during compression?
# How do you deal with data that was deemed cold but becomes hot at a later 
point?
# How do the Datanode block scanner, the directory scanner, the internal 
datanode data structures that track block length, and append interact with 
this feature?

Given that, depending on the approach taken, this could result in changes to 
some core parts of HDFS, so please write a design document. Alternatively, 
should we look at an external tool that can do this analysis and compress the 
files, based on the HDFS-2115 mechanism proposed by Todd, to minimize the 
impact to the HDFS core code?
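
To make that alternative concrete, a rough sketch of such an external tool using only public FileSystem APIs; the 30-day access-time threshold and the delete-after-rewrite swap are assumptions, not a worked-out design:

{code:java}
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class ColdFileCompressor {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    long coldMillis = 30L * 24 * 60 * 60 * 1000;  // assumed "cold" threshold
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
    for (FileStatus st : fs.listStatus(new Path(args[0]))) {
      if (st.isDir()) continue;
      // Relies on access times being enabled on the cluster.
      if (System.currentTimeMillis() - st.getAccessTime() < coldMillis) {
        continue;  // still hot, skip
      }
      Path gz = st.getPath().suffix(".gz");
      FSDataInputStream in = fs.open(st.getPath());
      OutputStream out = codec.createOutputStream(fs.create(gz));
      IOUtils.copyBytes(in, out, conf, true);  // closes both streams
      fs.delete(st.getPath(), false);  // swap: compressed copy replaces original
    }
  }
}
{code}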


 Transparent compression storage in HDFS
 ---

 Key: HDFS-2542
 URL: https://issues.apache.org/jira/browse/HDFS-2542
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: jinglong.liujl

 As in HDFS-2115, we want to provide a mechanism to improve storage usage in 
 HDFS by compression. Different from HDFS-2115, this issue focuses on 
 compressed storage. The ideas are below:
 To do:
 1. Compress cold data.
    Cold data: after writing (or the last read), the data has not been touched 
 by anyone for a long time.
    Hot data: after writing, many clients will read it, and it may be deleted 
 soon.
    Because hot data compression is not cost-effective, we only compress cold 
 data.
    In some cases, some data in a file may be accessed at high frequency, but 
 other data in the same file may be cold.
    To distinguish them, we compress at the block level.
 2. Compress data which has a high compression ratio.
    To separate high from low compression ratios, we try to compress the data; 
 if the compression ratio is too low, we never compress it.
 3. Forward compatibility.
    After compression, the data format on the datanode has changed, so old 
 clients cannot access it. To solve this issue, we provide a mechanism which 
 decompresses on the datanode.
 4. Support random access and append.
    As in HDFS-2115, random access can be supported by an index. We split the 
 data into fixed-length pieces before compression (we call each fixed-length 
 piece a chunk), and every chunk has its index (see the sketch after this 
 list).
    For random access, we can seek to the nearest index and read that chunk to 
 reach the precise position.
 5. Asynchronous compression, to avoid compression slowing down running jobs.
    In practice, we found that cluster CPU usage is not uniform. Some clusters 
 are idle at night, and others are idle in the afternoon. We should run the 
 compression task at full speed when the cluster is idle, and at low speed 
 when it is busy.
 Will do:
 1. Client-specific codecs and support for compressed transmission.
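
A minimal sketch of the chunk index from item 4, assuming a 64 KB chunk and java.util.zip compression (a real implementation would fill each chunk completely, use a Hadoop CompressionCodec, and persist the index with the block):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.DeflaterOutputStream;

// Sketch: split the stream into fixed-length chunks, compress each chunk
// independently, and record the compressed start offset of every chunk.
// Random access then seeks to the nearest recorded offset and decompresses
// only that chunk.
public class ChunkIndexSketch {
  static final int CHUNK = 64 * 1024;  // assumed fixed chunk length

  static List<Long> compress(InputStream in, OutputStream out)
      throws IOException {
    List<Long> index = new ArrayList<Long>();  // compressed offset of chunk i
    byte[] buf = new byte[CHUNK];
    long written = 0;
    int n;
    while ((n = in.read(buf)) > 0) {
      index.add(written);
      ByteArrayOutputStream chunk = new ByteArrayOutputStream();
      DeflaterOutputStream def = new DeflaterOutputStream(chunk);
      def.write(buf, 0, n);
      def.finish();
      chunk.writeTo(out);
      written += chunk.size();
    }
    return index;  // persisted alongside the compressed block
  }
}
{code}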





[jira] [Commented] (HDFS-2542) Transparent compression storage in HDFS

2011-11-10 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147891#comment-13147891
 ] 

Hari Mankude commented on HDFS-2542:


Adding to Suresh's comments, one of the key goals of compression is space 
reclamation. Given that HDFS has rigid notions of block sizes, compression 
could leave the filesystem with varied HDFS block sizes, and the NN has to be 
aware of them. The NN also needs to be able to reclaim the storage.

The other problem is that when data becomes hot again sometime in the future, 
the filesystem needs to have space to store the uncompressed version of the 
block.

Data deduplication is another approach that can be combined with compression 
to reduce the storage footprint.
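
For illustration only, a toy sketch of content-hash dedup at the block level; the NN-side reference counting and collision handling a real design would need are omitted:

{code:java}
import java.math.BigInteger;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Blocks with identical content hash to the same digest and share one copy.
public class DedupSketch {
  private final Map<String, Long> digestToBlockId = new HashMap<String, Long>();
  private long nextBlockId = 0;

  public synchronized long store(byte[] block) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    String digest = new BigInteger(1, md.digest(block)).toString(16);
    Long existing = digestToBlockId.get(digest);
    if (existing != null) {
      return existing;  // duplicate content: reuse the stored block
    }
    digestToBlockId.put(digest, nextBlockId);
    // ... write the block's bytes to storage here ...
    return nextBlockId++;
  }
}
{code}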







[jira] [Commented] (HDFS-2542) Transparent compression storage in HDFS

2011-11-10 Thread Andrew Purtell (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147928#comment-13147928
 ] 

Andrew Purtell commented on HDFS-2542:
--

bq. Data deduplication is another approach that can be combined with 
compression to reduce the storage footprint.

Dedup seems a strategy contrary to the basic rationale of HDFS: providing 
reliable storage. Instead of one missing block corrupting one file, it may 
impact many files, perhaps hundreds or thousands.







[jira] [Commented] (HDFS-2542) Transparent compression storage in HDFS

2011-11-10 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13147992#comment-13147992
 ] 

Hari Mankude commented on HDFS-2542:


Dedup blocks would be stored in an HDFS filesystem with 3 replicas. In fact, if 
a deduped block is a hot block with lots of references, the replica count can 
be increased for those blocks as a policy setting (sketched below).
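
Since replication is a per-file setting in HDFS, a sketch of that policy knob through the public API (the replica count of 5 is an arbitrary example):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RaiseReplication {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Raise replication for a file whose (deduped) blocks have become hot.
    fs.setReplication(new Path(args[0]), (short) 5);
  }
}
{code}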





[jira] [Commented] (HDFS-2542) Transparent compression storage in HDFS

2011-11-10 Thread Andrew Purtell (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148000#comment-13148000
 ] 

Andrew Purtell commented on HDFS-2542:
--

bq. Dedup blocks would be stored in an HDFS filesystem with 3 replicas.

That was already implied in my comment.







[jira] [Updated] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2011-11-10 Thread Jitendra Nath Pandey (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-2246:
---

Attachment: HDFS-2246-branch-0.20-security-205.1.patch

Updated patch addressing nic's comments.

 Shortcut a local client reads to a Datanodes files directly
 ---

 Key: HDFS-2246
 URL: https://issues.apache.org/jira/browse/HDFS-2246
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sanjay Radia
 Attachments: 0001-HDFS-347.-Local-reads.patch, 
 HDFS-2246-branch-0.20-security-205.1.patch, 
 HDFS-2246-branch-0.20-security-205.patch, 
 HDFS-2246-branch-0.20-security-205.patch, 
 HDFS-2246-branch-0.20-security-205.patch, 
 HDFS-2246-branch-0.20-security.3.patch, 
 HDFS-2246-branch-0.20-security.no-softref.patch, 
 HDFS-2246-branch-0.20-security.patch, HDFS-2246.20s.1.patch, 
 HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.4.txt, 
 HDFS-2246.20s.patch, localReadShortcut20-security.2patch








[jira] [Updated] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2539:
-

Attachment: h2539_2010.patch
            h2539_2010_0.20s.patch

Unwrap RemoteException.

 Support doAs and GETHOMEDIRECTORY in webhdfs
 

 Key: HDFS-2539
 URL: https://issues.apache.org/jira/browse/HDFS-2539
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h2539_2008.patch, h2539_2008_0.20s.patch, 
 h2539_2008_0.20s.patch, h2539_2009.patch, h2539_2009_0.20s.patch, 
 h2539_2009b.patch, h2539_2009b_0.20s.patch, h2539_2009c.patch, 
 h2539_2009c_0.20s.patch, h2539_2010.patch, h2539_2010_0.20s.patch








[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2011-11-10 Thread Jitendra Nath Pandey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148076#comment-13148076
 ] 

Jitendra Nath Pandey commented on HDFS-2246:


{quote}
  Check DFS_CLIENT_READ_SHORTCIRCUIT when initializing userWithLocalPathAccess. 
What should happen if DFS_CLIENT_READ_SHORTCIRCUIT is false but 
DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY is set to some user?
{quote}
The Datanode will allow getBlockLocalPathInfo if the user has local path 
access. DFS_CLIENT_READ_SHORTCIRCUIT is a client-side configuration and should 
be configured in the application using the HDFS client.
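
Roughly, the two knobs are read on different sides; a sketch (property names follow DFSConfigKeys, treat them as illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ShortCircuitConfigCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Datanode side: which user may call getBlockLocalPathInfo at all.
    String allowedUser = conf.get("dfs.block.local-path-access.user", "");
    // Client side: whether this client attempts the local read shortcut.
    boolean shortCircuit = conf.getBoolean("dfs.client.read.shortcircuit", false);
    System.out.println("datanode whitelist=" + allowedUser
        + ", client shortcircuit=" + shortCircuit);
  }
}
{code}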





[jira] [Created] (HDFS-2548) SafeModeException cannot be unwrapped

2011-11-10 Thread Tsz Wo (Nicholas), SZE (Created) (JIRA)
SafeModeException cannot be unwrapped
-

 Key: HDFS-2548
 URL: https://issues.apache.org/jira/browse/HDFS-2548
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE


DFSClient tries to unwrap SafeModeException.  It does not work since 
SafeModeException does not have the constructor SafeModeException(String).
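
A sketch of why the String constructor matters: unwrapping re-creates the exception reflectively from its class name and message, along these lines (not the exact RemoteException code):

{code:java}
import java.lang.reflect.Constructor;

public class UnwrapSketch {
  // A class without a (String) constructor cannot be reconstructed here.
  static Exception unwrap(String className, String message) {
    try {
      Class<?> cls = Class.forName(className);
      Constructor<?> ctor = cls.getConstructor(String.class);  // fails here
      return (Exception) ctor.newInstance(message);
    } catch (Exception e) {
      return null;  // caller falls back to the wrapped RemoteException
    }
  }
}
{code}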





[jira] [Commented] (HDFS-2246) Shortcut a local client reads to a Datanodes files directly

2011-11-10 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148083#comment-13148083
 ] 

Todd Lipcon commented on HDFS-2246:
---

Can you please hold off until next week to commit this? Many of the HDFS 
developers were at ApacheCon and Hadoop World this week - I know I'd like a 
chance to review, but haven't yet.

I'd also like to see a patch for trunk before this is released in a 
maintenance series. As I understand the policies on maintenance releases, we 
should not include new features until they're in trunk.





[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148129#comment-13148129
 ] 

Hadoop QA commented on HDFS-2539:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12503298/h2539_2010.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 17 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestFileAppend2
  org.apache.hadoop.hdfs.TestBalancerBandwidth

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1551//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1551//console

This message is automatically generated.





[jira] [Commented] (HDFS-2547) Design doc is wrong about default block placement policy.

2011-11-10 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148176#comment-13148176
 ] 

Aaron T. Myers commented on HDFS-2547:
--

Hey Harsh, seems like we should fix the comments then. Want to re-title/re-open 
this JIRA for that purpose?





[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Jitendra Nath Pandey (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148203#comment-13148203
 ] 

Jitendra Nath Pandey commented on HDFS-2539:


In JspHelper#initUGI, for the non-secure case, the authentication method should 
be SIMPLE.
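
A sketch of the suggested shape of the fix, not the actual JspHelper change:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;

public class InitUgiSketch {
  static UserGroupInformation initUgi(String userName) {
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser(userName);
    if (!UserGroupInformation.isSecurityEnabled()) {
      // Non-secure deployments should be tagged SIMPLE explicitly.
      ugi.setAuthenticationMethod(AuthenticationMethod.SIMPLE);
    }
    return ugi;
  }
}
{code}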





[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148250#comment-13148250
 ] 

Hadoop QA commented on HDFS-2539:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12503330/h2539_2010b.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 17 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestFileAppend
  org.apache.hadoop.hdfs.TestDFSRemove
  org.apache.hadoop.hdfs.TestFileAppend2
  org.apache.hadoop.hdfs.TestBalancerBandwidth

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1552//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1552//console

This message is automatically generated.





[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148251#comment-13148251
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-2539:
--

All the failed tests got "Cannot lock storage" errors in Jenkins.  It has 
nothing to do with the patch.





[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148253#comment-13148253
 ] 

Hudson commented on HDFS-2539:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1338/])
HDFS-2539. Support doAs and GETHOMEDIRECTORY in webhdfs.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1200731
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DoAsParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExceptionHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java






[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148254#comment-13148254
 ] 

Hudson commented on HDFS-2539:
--

Integrated in Hadoop-Common-trunk-Commit #1264 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1264/])
HDFS-2539. Support doAs and GETHOMEDIRECTORY in webhdfs.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1200731




[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148257#comment-13148257
 ] 

Hudson commented on HDFS-2539:
--

Integrated in Hadoop-Hdfs-0.23-Commit #163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/163/])
svn merge -c 1200731 from trunk for HDFS-2539.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1200734
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DoAsParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/ExceptionHandler.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserProvider.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationTokenForProxyUser.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java






[jira] [Updated] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2539:
-

   Resolution: Fixed
Fix Version/s: 0.23.1
   0.24.0
   0.23.0
   0.20.206.0
   0.20.205.1
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this.





[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148259#comment-13148259
 ] 

Hudson commented on HDFS-2539:
--

Integrated in Hadoop-Common-0.23-Commit #164 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/164/])
svn merge -c 1200731 from trunk for HDFS-2539.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1200734




[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148264#comment-13148264
 ] 

Hudson commented on HDFS-2539:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1286 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1286/])
HDFS-2539. Support doAs and GETHOMEDIRECTORY in webhdfs.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1200731




[jira] [Commented] (HDFS-2539) Support doAs and GETHOMEDIRECTORY in webhdfs

2011-11-10 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13148265#comment-13148265
 ] 

Hudson commented on HDFS-2539:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #175 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/175/])
svn merge -c 1200731 from trunk for HDFS-2539.

szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1200734