[jira] [Commented] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-06-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401969#comment-13401969
 ] 

Todd Lipcon commented on HDFS-3077:
---

bq. I want to understand how the changes can be reconciled with 3092. Currently 
BackupNode is being updated to use the JournalService. 

The journal interface as exposed by the quorum-capable Journal Node looks 
different enough from the BackupNode that I don't see any merit to combining 
the IPC protocols. It only muddies the interaction, IMO. For example, the 
QJournalProtocol has the concept of a "journal ID" so that each JournalNode can 
host journals for multiple namespaces at once, as well as the epoch concept 
which makes no sense in a BackupNode scenario. If we wanted to extend HDFS to 
act more like a true quorum-driven system (a la ZooKeeper) where each of the 
nodes maintains a full namespace as equal peers, we'd need to do more work on 
the commit protocol (eg adding an explicit "commit" RPC distinct from 
"journal"). That kind of change hasn't been proposed anywhere that I'm aware 
of, so I didn't want to complicate this design by considering it.
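
To make the contrast concrete, here is a rough sketch of what an epoch- and 
journal-ID-stamped call might look like. The names and signatures below are 
illustrative only, not the actual QJournalProtocol:

{code}
import java.io.IOException;

/**
 * Illustrative sketch only -- not the actual QJournalProtocol. It shows why
 * the quorum protocol differs from the BackupNode's JournalProtocol: every
 * call names a journal (so one JournalNode can host journals for several
 * namespaces) and carries an epoch (so stale writers can be fenced off).
 */
interface QuorumJournalSketch {
  // A would-be writer first claims an epoch higher than any previously
  // promised one; JournalNodes then reject calls stamped with older epochs.
  long newEpoch(String journalId, long proposedEpoch) throws IOException;

  // Edits are written with the (journalId, epoch) context on every call.
  void journal(String journalId, long epoch, long firstTxnId,
               int numTxns, byte[] records) throws IOException;
}
{code}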

At this point I would advocate removing the BackupNode entirely, as I don't 
know of a single person who has used it in the ~2 years since it was 
introduced. But that's a separate discussion.

bq. Once this is done, we were planning to merge 3092 into trunk. How should we 
proceed to merge 3077 and 3092 to trunk?

I used a bunch of the HDFS-3092 branch code and design in the development of 
this JIRA, so I would consider it already "incorporated" into the 3077 branch. 
So I would advocate treating the current 3092 branch as a stepping stone 
(server-side only) along the way to the full solution (server- and client-side 
implementation), and abandoning it. Of course I'll make sure that Brandon and 
Hari are given their due credit as co-authors of this patch.

bq. Is code review going to be based on this, or on changes committed to a 
branch in the Apache Hadoop code base?

I posted the git branch just for reference, since some contributors find it 
easier to do a git pull rather than manually apply the patches locally for 
review. But the link above is to the exact same code I've attached to the JIRA. 
Feel free to review by looking at the patch or at the branch. Would it be 
helpful for me to create a branch in SVN and push the pre-review patch series 
there for review instead of the external GitHub? Let me know.

> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3077-partial.txt, hdfs-3077.txt, 
> qjournal-design.pdf, qjournal-design.pdf
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3574) Fix small race and do some cleanup in GetImageServlet

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401962#comment-13401962
 ] 

Hadoop QA commented on HDFS-3574:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533590/hdfs-3574.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 2 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2709//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2709//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2709//console

This message is automatically generated.

> Fix small race and do some cleanup in GetImageServlet
> -
>
> Key: HDFS-3574
> URL: https://issues.apache.org/jira/browse/HDFS-3574
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3574.txt
>
>
> There's a very small race window in GetImageServlet, if the following 
> interleaving occurs:
> - The Storage object returns some local file in the storage directory (eg an 
> edits file or image file)
> - *Race*: some other process removes the file
> - GetImageServlet calls file.length() which returns 0, since it doesn't 
> exist. It thus faithfully sets the Content-Length header to 0
> - getFileClient() throws FileNotFoundException when trying to open the file. 
> But, since we call response.getOutputStream() before this, the headers have 
> already been sent, so we fail to send the "404" or "500" response that we 
> should.
> Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
> content, and thinks it has successfully downloaded the target file, when in 
> fact it has downloaded an empty one.
> I saw this in practice during the "edits synchronization" phase of recovery 
> while working on HDFS-3077, though it could apply to existing code paths as 
> well, I believe.
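
For illustration, a minimal sketch of the racy pattern described above 
(simplified; not the actual GetImageServlet code):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;

// Simplified sketch of the race described above; not the actual servlet code.
class RacyServeSketch {
  void serveFile(File file, HttpServletResponse response) throws IOException {
    // If another process has just removed the file, length() returns 0.
    response.setContentLength((int) file.length());
    // Opening the response stream commits the headers...
    OutputStream out = response.getOutputStream();
    // ...so the FileNotFoundException thrown here can no longer be turned
    // into a 404/500 -- the client sees a "successful" zero-byte download.
    FileInputStream in = new FileInputStream(file);
    try {
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    } finally {
      in.close();
    }
  }
}
{code}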

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3571) Allow EditLogFileInputStream to read from a remote URL

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401959#comment-13401959
 ] 

Hadoop QA commented on HDFS-3571:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533579/hdfs-3571.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2708//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2708//console

This message is automatically generated.

> Allow EditLogFileInputStream to read from a remote URL
> --
>
> Key: HDFS-3571
> URL: https://issues.apache.org/jira/browse/HDFS-3571
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3571.txt
>
>
> In order to start up from remote edits storage (like the JournalNodes of 
> HDFS-3077), the NN needs to be able to load edits from a URL, instead of just 
> local disk. This JIRA extends EditLogFileInputStream to be able to use a URL 
> reference in addition to the current File reference.
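
As a rough sketch of the idea (the class and method names below are 
hypothetical, not the actual patch), the byte source can be abstracted so the 
same edit-log reader works over a File or a URL:

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// Hypothetical sketch: abstract where the edit log bytes come from, so the
// stream can be backed by a local file or a remote URL (e.g. a JournalNode's
// HTTP servlet). Names are illustrative, not the actual HDFS classes.
interface EditLogSource {
  InputStream open() throws IOException;
}

class FileLogSource implements EditLogSource {
  private final File file;
  FileLogSource(File file) { this.file = file; }
  public InputStream open() throws IOException {
    return new FileInputStream(file);
  }
}

class UrlLogSource implements EditLogSource {
  private final URL url;
  UrlLogSource(URL url) { this.url = url; }
  public InputStream open() throws IOException {
    return url.openStream();
  }
}
{code}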

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-06-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401956#comment-13401956
 ] 

Suresh Srinivas commented on HDFS-3077:
---

bq. I would have liked to use the code exactly as it was, but the differences 
in design made it too difficult to try to reconcile, and I ended up 
copy-pasting and modifying rather than patching against that branch.
Todd, 3092 focused mainly on the server side; we abandoned some of the 
client-side work given the work in 3077. I want to understand how the changes 
can be reconciled with 3092. Currently BackupNode is being updated to use the 
JournalService. Once this is done, we were planning to merge 3092 into trunk. 
How should we proceed to merge 3077 and 3092 to trunk?

bq. Andy asked me to post a link to the corresponding github branch: 
https://github.com/toddlipcon/hadoop-common/commits/qjm-patchseries
Is code review going to be based on this, or on changes committed to a branch 
in the Apache Hadoop code base?

> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3077-partial.txt, hdfs-3077.txt, 
> qjournal-design.pdf, qjournal-design.pdf
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3564) Make the replication policy pluggable to allow custom replication policies

2012-06-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401954#comment-13401954
 ] 

Harsh J commented on HDFS-3564:
---

Nicholas,

Understood from target list that this is for branch-1 (which is also why I 
didn't close it, but just asked). Thank you for clarifying! :)

> Make the replication policy pluggable to allow custom replication policies
> --
>
> Key: HDFS-3564
> URL: https://issues.apache.org/jira/browse/HDFS-3564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Sumadhur Reddy Bolli
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> ReplicationTargetChooser currently determines the placement of replicas in 
> Hadoop. Making the replication policy pluggable would help in having custom 
> replication policies that suit the environment. 
> Eg1: Enabling placing replicas across different datacenters (not just racks)
> Eg2: Enabling placing replicas across multiple (more than 2) racks
> Eg3: Cloud environments like Azure have logical concepts like fault and 
> upgrade domains. Each fault domain spans multiple upgrade domains and each 
> upgrade domain spans multiple fault domains. Machines are typically spread 
> evenly across both fault and upgrade domains. Fault domain failures are 
> typically catastrophic/unplanned failures, and the possibility of data loss 
> is high. An upgrade domain can be taken down by Azure for maintenance 
> periodically. Each time an upgrade domain is taken down, a small percentage 
> of machines in the upgrade domain (typically 1-2%) are replaced due to disk 
> failures, thus losing data. Assuming the default replication factor of 3, 
> any 3 data nodes going down at the same time would mean potential data loss. 
> So it is important to have a policy that spreads replicas across both fault 
> and upgrade domains to ensure practically no data loss. The problem here is 
> two-dimensional and the default policy in Hadoop is one-dimensional. Custom 
> policies to address issues like these can be written if we make the policy 
> pluggable.
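
As a hedged illustration of why a pluggable, two-dimensional policy helps 
(the classes below are invented for this sketch; the pluggable API on trunk 
is BlockPlacementPolicy from HDFS-385):

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Invented sketch of a two-dimensional chooser: never place two replicas in
// the same fault domain *or* the same upgrade domain.
class Node {
  final String name;
  final int faultDomain;
  final int upgradeDomain;
  Node(String name, int fd, int ud) {
    this.name = name; this.faultDomain = fd; this.upgradeDomain = ud;
  }
}

class TwoDimensionalChooser {
  List<Node> chooseTargets(List<Node> candidates, int replicas) {
    Set<Integer> usedFault = new HashSet<Integer>();
    Set<Integer> usedUpgrade = new HashSet<Integer>();
    List<Node> chosen = new ArrayList<Node>();
    for (Node n : candidates) {
      if (chosen.size() == replicas) {
        break;
      }
      // Take the node only if both of its domains are still unused.
      if (!usedFault.contains(n.faultDomain)
          && !usedUpgrade.contains(n.upgradeDomain)) {
        usedFault.add(n.faultDomain);
        usedUpgrade.add(n.upgradeDomain);
        chosen.add(n);
      }
    }
    return chosen;
  }
}
{code}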

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3551) WebHDFS CREATE does not use client location for redirection

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401955#comment-13401955
 ] 

Hudson commented on HDFS-3551:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2411 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2411/])
HDFS-3551. WebHDFS CREATE should use client location for HTTP redirection. 
(Revision 1354316)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354316
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/Host2NodesMap.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/web
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/web/resources
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/web/resources/TestWebHdfsDataLocality.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java


> WebHDFS CREATE does not use client location for redirection
> ---
>
> Key: HDFS-3551
> URL: https://issues.apache.org/jira/browse/HDFS-3551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3551_20120620.patch, h3551_20120625.patch, 
> h3551_20120626.patch
>
>
> CREATE currently redirects client to a random datanode but not using the 
> client location information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3567) Provide a way to enforce clearing of trash data immediately

2012-06-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401953#comment-13401953
 ] 

Harsh J commented on HDFS-3567:
---

I thought of {{dfsadmin}} as the only target that can have this, given that 
other filesystems (local, for example) don't have an 'admin' layer. This sort 
of targets the Emptier thread run at the NameNode, no?

We can move it to common if that makes more sense.

> Provide a way to enforce clearing of trash data immediately
> ---
>
> Key: HDFS-3567
> URL: https://issues.apache.org/jira/browse/HDFS-3567
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Priority: Minor
>
> As discussed at http://search-hadoop.com/m/r1lMa13eN7O, it would be good to 
> have a dfsadmin sub-command (or similar) that admins can use to force a 
> trash-emptier run from the NameNode, instead of waiting for the trash 
> clearance interval to pass. This can come in handy when attempting to 
> quickly delete data in a cluster that is filling up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-06-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401925#comment-13401925
 ] 

Todd Lipcon commented on HDFS-3077:
---

Andy asked me to post a link to the corresponding github branch: 
https://github.com/toddlipcon/hadoop-common/commits/qjm-patchseries
I'm also going to try to write up a brief "code tour" of how it might make 
sense to look through this (in addition to improving the javadoc/comments a bit 
further in the next rev).

> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3077-partial.txt, hdfs-3077.txt, 
> qjournal-design.pdf, qjournal-design.pdf
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3188) Add infrastructure for waiting for a quorum of ListenableFutures to respond

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3188:
--

Resolution: Incomplete
Status: Resolved  (was: Patch Available)

I ended up merging this into a larger patch attached to HDFS-3077.

> Add infrastructure for waiting for a quorum of ListenableFutures to respond
> ---
>
> Key: HDFS-3188
> URL: https://issues.apache.org/jira/browse/HDFS-3188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3188.txt
>
>
> This JIRA adds the {{QuorumCall}} class which is used in HDFS-3077. As 
> described in the design document, this class allows a set of 
> ListenableFutures to be wrapped, and the caller can wait for a specific 
> number of responses, or a timeout.
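
A minimal sketch of that idea (illustrative only; the actual QuorumCall is 
part of the HDFS-3077 patch):

{code}
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import java.util.List;
import java.util.concurrent.TimeoutException;

// Minimal sketch, not the actual QuorumCall: count successes across a set
// of ListenableFutures and block until a threshold is reached or a
// deadline passes.
class QuorumWaitSketch<T> {
  private int successes;

  QuorumWaitSketch(List<ListenableFuture<T>> calls) {
    for (final ListenableFuture<T> f : calls) {
      f.addListener(new Runnable() {
        public void run() {
          synchronized (QuorumWaitSketch.this) {
            try {
              f.get();        // future is complete; throws if the call failed
              successes++;
            } catch (Exception ignored) {
              // a failed response still wakes up the waiter below
            }
            QuorumWaitSketch.this.notifyAll();
          }
        }
      }, MoreExecutors.directExecutor());
    }
  }

  /** Block until 'quorum' calls succeed, or throw on timeout. */
  synchronized void waitFor(int quorum, long timeoutMillis)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (successes < quorum) {
      long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        throw new TimeoutException("quorum not reached in time");
      }
      wait(remaining);
    }
  }
}
{code}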

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-3189) Add preliminary QJournalProtocol interface, translators

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-3189.
---

Resolution: Incomplete

I ended up doing this as part of a larger patch on HDFS-3077.

> Add preliminary QJournalProtocol interface, translators
> ---
>
> Key: HDFS-3189
> URL: https://issues.apache.org/jira/browse/HDFS-3189
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3189-prelim.txt
>
>
> This JIRA is to add the preliminary code for the QJournalProtocol. This 
> protocol differs from JournalProtocol in the following ways:
> - each call has context information indicating the epoch number of the 
> requester
> - it contains calls that are specific to epoch number generation, etc, which 
> do not apply to other journaling daemons such as the BackupNode
> My guess is that, at some point, we can merge back down to one protocol, but 
> during the initial implementation phase, it will be useful to have a distinct 
> protocol for this project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3077:
--

Attachment: hdfs-3077.txt

Here is an initial patch with the implementation of this design. It is not 
complete, but I'm posting it here as it's already grown large, and I'd like to 
start the review process while I continue to add test coverage and iron out 
various TODOs which are littered around the code.

As it is, the code can be run, and I can successfully start/restart NNs, fail 
JNs, etc, and it mostly "works as advertised". There are known deficiencies 
which I'm working on addressing, and these should mostly be marked by TODOs.

This patch is on top of the following:

ffcfc55 HDFS-3190. 1: Extract code to atomically write a file containing a long
025759c HDFS-3571. Add URL support to EditLogFileInputStream
707a309 HDFS-3572. Clean up init of SPNEGO
d84516f HDFS-3573. Change instantiation of journal managers to have NSInfo
f61dc7d HDFS-3574. Fix race in GetImageServlet where file is removed during 
header-setting
(and those on top of trunk).

I did not end up basing this on the HDFS-3092 branch as I originally planned, 
though there's a bunch of code borrowed from the early work done on that branch 
by Brandon and Hari. I would have liked to use the code exactly as it was, but 
the differences in design made it too difficult to try to reconcile, and I 
ended up copy-pasting and modifying rather than patching against that branch. 
(For example, all of the RPCs in this design go through an async queue in 
order to do quorum writes; a sketch of that pattern follows below.)
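
A rough sketch of that queueing pattern (assumed structure for illustration, 
not the actual patch):

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Assumed-for-illustration sketch: each JournalNode gets its own
// single-threaded queue, so calls to one node stay ordered while a slow
// node cannot block writes to the others. The caller collects the futures
// and waits for a quorum of them to succeed.
class AsyncLoggerSketch {
  private final ExecutorService queue = Executors.newSingleThreadExecutor();

  Future<Void> sendEdits(final long firstTxnId, final byte[] records) {
    return queue.submit(new Callable<Void>() {
      public Void call() throws Exception {
        // The journal() RPC to this node would be issued here.
        return null;
      }
    });
  }
}
{code}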

Of course there will be follow-up work to create a test plan, add substantially 
more tests, add docs, etc. But my hope is that, after review, we can commit 
this (and the prereq patches) either to trunk or a branch and work from there 
to fix the remaining work items, test, etc.

> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3077-partial.txt, hdfs-3077.txt, 
> qjournal-design.pdf, qjournal-design.pdf
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3551) WebHDFS CREATE does not use client location for redirection

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401908#comment-13401908
 ] 

Hadoop QA commented on HDFS-3551:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533573/h3551_20120626.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2705//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2705//console

This message is automatically generated.

> WebHDFS CREATE does not use client location for redirection
> ---
>
> Key: HDFS-3551
> URL: https://issues.apache.org/jira/browse/HDFS-3551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3551_20120620.patch, h3551_20120625.patch, 
> h3551_20120626.patch
>
>
> CREATE currently redirects client to a random datanode but not using the 
> client location information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3190) Simple refactors in existing NN code to assist QuorumJournalManager extension

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401904#comment-13401904
 ] 

Hadoop QA commented on HDFS-3190:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533572/hdfs-3190.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2704//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2704//console

This message is automatically generated.

> Simple refactors in existing NN code to assist QuorumJournalManager extension
> -
>
> Key: HDFS-3190
> URL: https://issues.apache.org/jira/browse/HDFS-3190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3190.txt, hdfs-3190.txt, hdfs-3190.txt, 
> hdfs-3190.txt
>
>
> This JIRA is for some simple refactors in the NN:
> - refactor the code which writes the seen_txid file in NNStorage into a new 
> "LongContainingFile" utility class. This is useful for the JournalNode to 
> atomically/durably record its last promised epoch (a sketch of the pattern 
> follows below)
> - refactor the interface from FileJournalManager back to StorageDirectory to 
> use a StorageErrorReporter interface. This allows FileJournalManager to be 
> used in isolation from a full StorageDirectory.
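
A hedged sketch of the atomic/durable write pattern the first bullet 
describes (illustrative, not the actual utility class):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch of the classic atomic/durable "write a long to a file" pattern:
// write to a temp file, force it to disk, then rename over the old copy so
// a crash never leaves a partial or empty value behind.
class LongFileSketch {
  static void writeLong(File dest, long value) throws IOException {
    File tmp = new File(dest.getParentFile(), dest.getName() + ".tmp");
    FileOutputStream out = new FileOutputStream(tmp);
    try {
      out.write(Long.toString(value).getBytes(StandardCharsets.UTF_8));
      out.getChannel().force(true);   // durable: fsync data and metadata
    } finally {
      out.close();
    }
    if (!tmp.renameTo(dest)) {        // atomic on POSIX filesystems
      throw new IOException("rename failed: " + tmp + " -> " + dest);
    }
  }
}
{code}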

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3574) Fix small race and do some cleanup in GetImageServlet

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3574:
--

Attachment: hdfs-3574.txt

This patch fixes the above race as follows:

- after setting the headers, we check again to see that the file exists. If it 
doesn't exist at that point, we throw the FNFE before opening the response 
output stream. We pass the already-opened stream (from before the exists check) 
into {{getFileServer(...)}} so that we don't have a 
time-of-check-to-time-of-use (TOCTOU) bug here (see the sketch below).
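
A simplified sketch of the idea (not the actual patch, which passes the 
pre-opened stream into {{getFileServer(...)}}): obtain the stream before any 
response state is committed, so a missing file can still produce an error 
status.

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServletResponse;

// Simplified sketch of the fix: open the file first, then serve from the
// already-opened stream so there is no check-then-use window left.
class FixedServeSketch {
  void serveFile(File file, HttpServletResponse response) throws IOException {
    FileInputStream in;
    try {
      in = new FileInputStream(file);  // FNFE surfaces before headers commit
    } catch (FileNotFoundException e) {
      response.sendError(HttpServletResponse.SC_NOT_FOUND, file.getName());
      return;
    }
    try {
      // Size the header from the open stream, not from a fresh stat of the
      // (possibly deleted) path.
      response.setContentLength((int) in.getChannel().size());
      OutputStream out = response.getOutputStream();
      byte[] buf = new byte[4096];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    } finally {
      in.close();
    }
  }
}
{code}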

I also did a little cleanup and made some stuff public for later use in 
HDFS-3077. I hope it's OK to do these trivial changes in this same JIRA. If 
it's a big problem I'll move them elsewhere.

Unfortunately I didn't write a unit test for this, as it's a somewhat difficult 
race to reproduce.

> Fix small race and do some cleanup in GetImageServlet
> -
>
> Key: HDFS-3574
> URL: https://issues.apache.org/jira/browse/HDFS-3574
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3574.txt
>
>
> There's a very small race window in GetImageServlet, if the following 
> interleaving occurs:
> - The Storage object returns some local file in the storage directory (eg an 
> edits file or image file)
> - *Race*: some other process removes the file
> - GetImageServlet calls file.length() which returns 0, since it doesn't 
> exist. It thus faithfully sets the Content-Length header to 0
> - getFileClient() throws FileNotFoundException when trying to open the file. 
> But, since we call response.getOutputStream() before this, the headers have 
> already been sent, so we fail to send the "404" or "500" response that we 
> should.
> Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
> content, and thinks it has successfully downloaded the target file, when in 
> fact it has downloaded an empty one.
> I saw this in practice during the "edits synchronization" phase of recovery 
> while working on HDFS-3077, though it could apply to existing code paths as 
> well, I believe.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3574) Fix small race and do some cleanup in GetImageServlet

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3574:
--

Status: Patch Available  (was: Open)

> Fix small race and do some cleanup in GetImageServlet
> -
>
> Key: HDFS-3574
> URL: https://issues.apache.org/jira/browse/HDFS-3574
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3574.txt
>
>
> There's a very small race window in GetImageServlet, if the following 
> interleaving occurs:
> - The Storage object returns some local file in the storage directory (eg an 
> edits file or image file)
> - *Race*: some other process removes the file
> - GetImageServlet calls file.length() which returns 0, since it doesn't 
> exist. It thus faithfully sets the Content-Length header to 0
> - getFileClient() throws FileNotFoundException when trying to open the file. 
> But, since we call response.getOutputStream() before this, the headers have 
> already been sent, so we fail to send the "404" or "500" response that we 
> should.
> Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
> content, and thinks it has successfully downloaded the target file, when in 
> fact it has downloaded an empty one.
> I saw this in practice during the "edits synchronization" phase of recovery 
> while working on HDFS-3077, though it could apply to existing code paths as 
> well, I believe.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3574) Fix small race and do some cleanup in GetImageServlet

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3574:
--

Description: 
There's a very small race window in GetImageServlet, if the following 
interleaving occurs:
- The Storage object returns some local file in the storage directory (eg an 
edits file or image file)
- *Race*: some other process removes the file
- GetImageServlet calls file.length() which returns 0, since it doesn't exist. 
It thus faithfully sets the Content-Length header to 0
- getFileClient() throws FileNotFoundException when trying to open the file. 
But, since we call response.getOutputStream() before this, the headers have 
already been sent, so we fail to send the "404" or "500" response that we 
should.

Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
content, and thinks it has successfully downloaded the target file, when in 
fact it has downloaded an empty one.

I saw this in practice during the "edits synchronization" phase of recovery 
while working on HDFS-3077, though it could apply to existing code paths as 
well, I believe.

  was:
There's a very small race window in GetImageServlet, if the following 
interleaving occurs:
- The Storage object returns some local file in the storage directory (eg an 
edits file or image file)
- *Race*: some other process removes the file
- GetImageServlet calls file.length() which returns 0, since it doesn't exist. 
It thus faithfully sets the Content-Length header to 0
- getFileClient() throws FileNotFoundException when trying to open the file. 
But, since we call response.getOutputStream() before this, the headers have 
already been sent, so we fail to send the "404" or "500" response that we 
should.

Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
content, and thinks it has successfully downloaded the target file, when in 
fact it has downloaded an empty one.

I have filed this as a subtask of HDFS-3077 since I only saw it in practice 
during the "edits synchronization" phase of recovery during that work, though 
it could apply to existing code paths as well, I believe.


> Fix small race and do some cleanup in GetImageServlet
> -
>
> Key: HDFS-3574
> URL: https://issues.apache.org/jira/browse/HDFS-3574
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
>
> There's a very small race window in GetImageServlet, if the following 
> interleaving occurs:
> - The Storage object returns some local file in the storage directory (eg an 
> edits file or image file)
> - *Race*: some other process removes the file
> - GetImageServlet calls file.length() which returns 0, since it doesn't 
> exist. It thus faithfully sets the Content-Length header to 0
> - getFileClient() throws FileNotFoundException when trying to open the file. 
> But, since we call response.getOutputStream() before this, the headers have 
> already been sent, so we fail to send the "404" or "500" response that we 
> should.
> Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
> content, and thinks it has successfully downloaded the target file, when in 
> fact it has downloaded an empty one.
> I saw this in practice during the "edits synchronization" phase of recovery 
> while working on HDFS-3077, though it could apply to existing code paths as 
> well, I believe.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3574) Fix small race and do some cleanup in GetImageServlet

2012-06-26 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3574:
-

 Summary: Fix small race and do some cleanup in GetImageServlet
 Key: HDFS-3574
 URL: https://issues.apache.org/jira/browse/HDFS-3574
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


There's a very small race window in GetImageServlet, if the following 
interleaving occurs:
- The Storage object returns some local file in the storage directory (eg an 
edits file or image file)
- *Race*: some other process removes the file
- GetImageServlet calls file.length() which returns 0, since it doesn't exist. 
It thus faithfully sets the Content-Length header to 0
- getFileClient() throws FileNotFoundException when trying to open the file. 
But, since we call response.getOutputStream() before this, the headers have 
already been sent, so we fail to send the "404" or "500" response that we 
should.

Thus, the client sees a Content-Length header of 0 followed by zero bytes of 
content, and thinks it has successfully downloaded the target file, when in 
fact it has downloaded an empty one.

I have filed this as a subtask of HDFS-3077 since I only saw it in practice 
during the "edits synchronization" phase of recovery during that work, though 
it could apply to existing code paths as well, I believe.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3573:
--

Attachment: hdfs-3573.txt

I had to add a shim in the BKJM constructor so its tests would compile. I also 
added a TODO, as requested by ivank in an offline conversation, for him to make 
use of this improvement within BKJM.

> Supply NamespaceInfo when instantiating JournalManagers
> ---
>
> Key: HDFS-3573
> URL: https://issues.apache.org/jira/browse/HDFS-3573
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3573.txt, hdfs-3573.txt
>
>
> Currently, the JournalManagers are instantiated before the NamespaceInfo is 
> loaded from local storage directories. This is problematic since the JM may 
> want to verify that the storage info associated with the journal matches the 
> NN which is starting up (eg to prevent an operator accidentally configuring 
> two clusters against the same remote journal storage). This JIRA rejiggers 
> the initialization sequence so that the JMs receive NamespaceInfo as a 
> constructor argument.
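
A small sketch of the check this enables (names are illustrative, not the 
actual JournalManager API):

{code}
import java.io.IOException;

// Illustrative sketch: a journal manager that receives the namespace
// identity at construction can refuse to talk to journal storage that was
// formatted for a different cluster.
class JournalManagerSketch {
  private final int expectedNamespaceId;

  JournalManagerSketch(int namespaceIdFromNN) {
    this.expectedNamespaceId = namespaceIdFromNN;
  }

  void checkJournalStorage(int namespaceIdOnDisk) throws IOException {
    if (namespaceIdOnDisk != expectedNamespaceId) {
      throw new IOException("Journal storage belongs to namespace "
          + namespaceIdOnDisk + " but this NN is " + expectedNamespaceId);
    }
  }
}
{code}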

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3190) Simple refactors in existing NN code to assist QuorumJournalManager extension

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3190:
--

Attachment: hdfs-3190.txt

Actually, I went back to the original idea of a StorageErrorReporter interface. 
It makes the dependencies between JournalManager and the Storage clearer, which 
is nice when trying to unit-test a JournalManager in isolation. Otherwise this 
patch should be the same.

> Simple refactors in existing NN code to assist QuorumJournalManager extension
> -
>
> Key: HDFS-3190
> URL: https://issues.apache.org/jira/browse/HDFS-3190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3190.txt, hdfs-3190.txt, hdfs-3190.txt, 
> hdfs-3190.txt
>
>
> This JIRA is for some simple refactors in the NN:
> - refactor the code which writes the seen_txid file in NNStorage into a new 
> "LongContainingFile" utility class. This is useful for the JournalNode to 
> atomically/durably record its last promised epoch
> - refactor the interface from FileJournalManager back to StorageDirectory to 
> use a StorageErrorReporter interface. This allows FileJournalManager to be 
> used in isolation from a full StorageDirectory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401881#comment-13401881
 ] 

Hadoop QA commented on HDFS-3573:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533581/hdfs-3573.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javac.  The patch appears to cause the build to fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2706//console

This message is automatically generated.

> Supply NamespaceInfo when instantiating JournalManagers
> ---
>
> Key: HDFS-3573
> URL: https://issues.apache.org/jira/browse/HDFS-3573
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3573.txt
>
>
> Currently, the JournalManagers are instantiated before the NamespaceInfo is 
> loaded from local storage directories. This is problematic since the JM may 
> want to verify that the storage info associated with the journal matches the 
> NN which is starting up (eg to prevent an operator accidentally configuring 
> two clusters against the same remote journal storage). This JIRA rejiggers 
> the initialization sequence so that the JMs receive NamespaceInfo as a 
> constructor argument.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3572) Cleanup code which inits SPNEGO in HttpServer

2012-06-26 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401877#comment-13401877
 ] 

Andy Isaacson commented on HDFS-3572:
-

LGTM. Thanks for rooting out the SPENGO.

> Cleanup code which inits SPNEGO in HttpServer
> -
>
> Key: HDFS-3572
> URL: https://issues.apache.org/jira/browse/HDFS-3572
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, security
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3572.txt
>
>
> Currently the code which inits the SPNEGO filter is duplicated between the 
> 2NN and NN. We should move this into the HttpServer utility class to clean it 
> up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3573:
--

Attachment: hdfs-3573.txt

> Supply NamespaceInfo when instantiating JournalManagers
> ---
>
> Key: HDFS-3573
> URL: https://issues.apache.org/jira/browse/HDFS-3573
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3573.txt
>
>
> Currently, the JournalManagers are instantiated before the NamespaceInfo is 
> loaded from local storage directories. This is problematic since the JM may 
> want to verify that the storage info associated with the journal matches the 
> NN which is starting up (eg to prevent an operator accidentally configuring 
> two clusters against the same remote journal storage). This JIRA rejiggers 
> the initialization sequence so that the JMs receive NamespaceInfo as a 
> constructor argument.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3573:
--

Status: Patch Available  (was: Open)

> Supply NamespaceInfo when instantiating JournalManagers
> ---
>
> Key: HDFS-3573
> URL: https://issues.apache.org/jira/browse/HDFS-3573
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3573.txt
>
>
> Currently, the JournalManagers are instantiated before the NamespaceInfo is 
> loaded from local storage directories. This is problematic since the JM may 
> want to verify that the storage info associated with the journal matches the 
> NN which is starting up (eg to prevent an operator accidentally configuring 
> two clusters against the same remote journal storage). This JIRA rejiggers 
> the initialization sequence so that the JMs receive NamespaceInfo as a 
> constructor argument.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-06-26 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3573:
-

 Summary: Supply NamespaceInfo when instantiating JournalManagers
 Key: HDFS-3573
 URL: https://issues.apache.org/jira/browse/HDFS-3573
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hdfs-3573.txt

Currently, the JournalManagers are instantiated before the NamespaceInfo is 
loaded from local storage directories. This is problematic since the JM may 
want to verify that the storage info associated with the journal matches the NN 
which is starting up (eg to prevent an operator accidentally configuring two 
clusters against the same remote journal storage). This JIRA rejiggers the 
initialization sequence so that the JMs receive NamespaceInfo as a constructor 
argument.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3572) Cleanup code which inits SPNEGO in HttpServer

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3572:
--

Attachment: hdfs-3572.txt

Attached patch refactors the common code into HttpServer and also fixes several 
places in which SPNEGO was misspelled as SPENGO.
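
For illustration, the shape of the cleanup might look like the sketch below; 
the helper method is hypothetical, and the parameter keys are assumptions 
based on Hadoop's AuthenticationFilter conventions, not the exact patch:

{code}
import java.util.HashMap;
import java.util.Map;

// Assumed-for-illustration sketch of the refactor: one shared helper that
// both the NN and 2NN call, instead of each duplicating the SPNEGO filter
// setup. The helper itself is hypothetical.
class SpnegoInitSketch {
  static Map<String, String> spnegoParams(String principal, String keytab) {
    Map<String, String> params = new HashMap<String, String>();
    params.put("type", "kerberos");
    params.put("kerberos.principal", principal);
    params.put("kerberos.keytab", keytab);
    return params;
  }
  // Both servers would then register the filter once, e.g.:
  //   httpServer.defineFilter(ctx, "SPNEGO",
  //       AuthenticationFilter.class.getName(),
  //       spnegoParams(principal, keytab), null);
}
{code}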

> Cleanup code which inits SPNEGO in HttpServer
> -
>
> Key: HDFS-3572
> URL: https://issues.apache.org/jira/browse/HDFS-3572
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, security
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3572.txt
>
>
> Currently the code which inits the SPNEGO filter is duplicated between the 
> 2NN and NN. We should move this into the HttpServer utility class to clean it 
> up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3572) Cleanup code which inits SPNEGO in HttpServer

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3572:
--

Status: Patch Available  (was: Open)

> Cleanup code which inits SPNEGO in HttpServer
> -
>
> Key: HDFS-3572
> URL: https://issues.apache.org/jira/browse/HDFS-3572
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node, security
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3572.txt
>
>
> Currently the code which inits the SPNEGO filter is duplicated between the 
> 2NN and NN. We should move this into the HttpServer utility class to clean it 
> up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3572) Cleanup code which inits SPNEGO in HttpServer

2012-06-26 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3572:
-

 Summary: Cleanup code which inits SPNEGO in HttpServer
 Key: HDFS-3572
 URL: https://issues.apache.org/jira/browse/HDFS-3572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node, security
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


Currently the code which inits the SPNEGO filter is duplicated between the 2NN 
and NN. We should move this into the HttpServer utility class to clean it up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3537) put libhdfs source files in a directory named libhdfs

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401865#comment-13401865
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3537:
--

Eli, it sounds good.  Thanks for explaining it.

> put libhdfs source files in a directory named libhdfs
> -
>
> Key: HDFS-3537
> URL: https://issues.apache.org/jira/browse/HDFS-3537
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3537.001.patch
>
>
> Move libhdfs source files from main/native to main/native/libhdfs.  Rename 
> hdfs_read to libhdfs_test_read; rename hdfs_write to libhdfs_test_write.
> The rationale is that we'd like to add some other stuff under main/native 
> (like fuse_dfs) and it's nice to have separate things in separate directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3571) Allow EditLogFileInputStream to read from a remote URL

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3571:
--

Attachment: hdfs-3571.txt

> Allow EditLogFileInputStream to read from a remote URL
> --
>
> Key: HDFS-3571
> URL: https://issues.apache.org/jira/browse/HDFS-3571
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3571.txt
>
>
> In order to start up from remote edits storage (like the JournalNodes of 
> HDFS-3077), the NN needs to be able to load edits from a URL, instead of just 
> local disk. This JIRA extends EditLogFileInputStream to be able to use a URL 
> reference in addition to the current File reference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3571) Allow EditLogFileInputStream to read from a remote URL

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3571:
--

Status: Patch Available  (was: Open)

> Allow EditLogFileInputStream to read from a remote URL
> --
>
> Key: HDFS-3571
> URL: https://issues.apache.org/jira/browse/HDFS-3571
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hdfs-3571.txt
>
>
> In order to start up from remote edits storage (like the JournalNodes of 
> HDFS-3077), the NN needs to be able to load edits from a URL, instead of just 
> local disk. This JIRA extends EditLogFileInputStream to be able to use a URL 
> reference in addition to the current File reference.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3564) Make the replication policy pluggable to allow custom replication policies

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401856#comment-13401856
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3564:
--

Harsh, this is not a dupe since this is for branch-1/branch-1-win.  We probably 
should first backport HDFS-385.

BTW, there is ongoing work on supporting different failure and locality 
topologies; see HADOOP-8468.

> Make the replication policy pluggable to allow custom replication policies
> --
>
> Key: HDFS-3564
> URL: https://issues.apache.org/jira/browse/HDFS-3564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Sumadhur Reddy Bolli
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> ReplicationTargetChooser currently determines the placement of replicas in 
> hadoop. Making the replication policy pluggable would help in having custom 
> replication policies that suit the environment. 
> Eg1: Enabling placing replicas across different datacenters(not just racks)
> Eg2: Enabling placing replicas across multiple(more than 2) racks
> Eg3: Cloud environments like azure have logical concepts like fault and 
> upgrade domains. Each fault domain spans multiple upgrade domains and each 
> upgrade domain spans multiple fault domains. Machines are spread typically 
> evenly across both fault and upgrade domains. Fault domain failures are 
> typically catastrophic/unplanned failures and data loss possibility is high. 
> An upgrade domain can be taken down by azure for maintenance periodically. 
> Each time an upgrade domain is taken down a small percentage of machines in 
> the upgrade domain(typically 1-2%) are replaced due to disk failures, thus 
> losing data. Assuming the default replication factor 3, any 3 data nodes 
> going down at the same time would mean potential data loss. So, it is 
> important to have a policy that spreads replicas across both fault and 
> upgrade domains to ensure practically no data loss. The problem here is two 
> dimensional and the default policy in hadoop is one-dimensional. Custom 
> policies to address issues like these can be written if we make the policy 
> pluggable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3551) WebHDFS CREATE does not use client location for redirection

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3551:
-

Attachment: h3551_20120626.patch

Thanks Suresh for the review.

h3551_20120626.patch: cleans up the imports and rewrites javadoc/comments.

> WebHDFS CREATE does not use client location for redirection
> ---
>
> Key: HDFS-3551
> URL: https://issues.apache.org/jira/browse/HDFS-3551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3551_20120620.patch, h3551_20120625.patch, 
> h3551_20120626.patch
>
>
> CREATE currently redirects client to a random datanode but not using the 
> client location information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3190) Simple refactors in existing NN code to assist QuorumJournalManager extension

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3190:
--

Attachment: hdfs-3190.txt

Attaching a new draft of this patch:

- I agreed with Colin that LongContainingFile was a bad name. I ended up 
renaming it to PersistentLong, and clarified its purpose with JavaDoc. I also 
made it instantiable as a wrapper which holds a persisted long value -- this 
was useful in the development of HDFS-3077 to keep the "promised epoch" 
persistent across restarts.

- Instead of introducing StorageErrorReporter, I just moved the error reporting 
functionality up into {{Storage}} instead of {{NNStorage}}. It seems like a 
generally useful thing -- in the future we may want to consolidate the 
error-tracking functionality between the DN and NN using this mechanism, for 
example. For now, the {{Storage}} implementation just logs the errors.

- Change {{TransferFsImage}} to take a {{Storage}} instead of {{NNStorage}}. 
This is so that in HDFS-3077, we can download logs into a new {{JNStorage}} class.

- Move {{getFiles()}} from {{NNStorage}} into {{Storage}} since it's also 
generally useful and not NN-specific.

- Some minor refactoring in {{TransferFsImage}} to make the code more reusable 
(also used for edits transfer in HDFS-3077).

While extracting PersistentLong, I also noticed a bug: if an IOE occurred 
while writing the file, it would still attempt to close the 
AtomicFileOutputStream. This could cause the incompletely written value to get 
incorrectly "committed". I added a simple "abort" function for this.

> Simple refactors in existing NN code to assist QuorumJournalManager extension
> -
>
> Key: HDFS-3190
> URL: https://issues.apache.org/jira/browse/HDFS-3190
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hdfs-3190.txt, hdfs-3190.txt, hdfs-3190.txt
>
>
> This JIRA is for some simple refactors in the NN:
> - refactor the code which writes the seen_txid file in NNStorage into a new 
> "LongContainingFile" utility class. This is useful for the JournalNode to 
> atomically/durably record its last promised epoch
> - refactor the interface from FileJournalManager back to StorageDirectory to 
> use a StorageErrorReport interface. This allows FileJournalManager to be used 
> in isolation of a full StorageDirectory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3571) Allow EditLogFileInputStream to read from a remote URL

2012-06-26 Thread Todd Lipcon (JIRA)
Todd Lipcon created HDFS-3571:
-

 Summary: Allow EditLogFileInputStream to read from a remote URL
 Key: HDFS-3571
 URL: https://issues.apache.org/jira/browse/HDFS-3571
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon


In order to start up from remote edits storage (like the JournalNodes of 
HDFS-3077), the NN needs to be able to load edits from a URL, instead of just 
local disk. This JIRA extends EditLogFileInputStream to be able to use a URL 
reference in addition to the current File reference.
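
One way to structure this (a sketch of the general shape, not necessarily the 
eventual patch) is to hide the byte source behind a small internal interface 
so the stream-decoding logic is shared between local files and URLs:
{code}
private interface LogSource {
  InputStream getInputStream() throws IOException;
  long length();     // may be unknown for remote logs
  String getName();  // for error messages
}

private static class FileLog implements LogSource {
  private final File file;
  FileLog(File file) { this.file = file; }
  public InputStream getInputStream() throws IOException {
    return new FileInputStream(file);
  }
  public long length() { return file.length(); }
  public String getName() { return file.getPath(); }
}

private static class URLLog implements LogSource {
  private final URL url;
  URLLog(URL url) { this.url = url; }
  public InputStream getInputStream() throws IOException {
    return url.openConnection().getInputStream();
  }
  public long length() { return -1; } // not known up front
  public String getName() { return url.toString(); }
}
{code}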

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3567) Provide a way to enforce clearing of trash data immediately

2012-06-26 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401838#comment-13401838
 ] 

Tom White commented on HDFS-3567:
-

Trash is not HDFS-specific so this should be in Common, shouldn't it?

> Provide a way to enforce clearing of trash data immediately
> ---
>
> Key: HDFS-3567
> URL: https://issues.apache.org/jira/browse/HDFS-3567
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Priority: Minor
>
> As discussed at http://search-hadoop.com/m/r1lMa13eN7O, it would be good to 
> have a dfsadmin sub-command (or similar) that admins can use to enforce a 
> trash emptier option from the NameNode, instead of waiting for the trash 
> clearance interval to pass. Can come handy when attempting to quickly delete 
> away data in a filling up cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3510) Improve FSEditLog pre-allocation

2012-06-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401765#comment-13401765
 ] 

Suresh Srinivas commented on HDFS-3510:
---

Finally took some time to review the tests as well. Here are some comments:
# TestNamenodeRecovery
#* Since you have just modified the tests, could you please add some javadoc to 
the EditLogTestSetup class. Also please add javadoc to the newly added class 
EltsTestOpcodesAfterPadding.
#* You may want to move adding the delete operation to a method, to avoid code 
duplication.
#* Very minor - if the padding length is zero, you may want to just return from 
padEditLog().
# TestEditlogFileOutputStream
#* Can you please add some javadoc to the class.
#* testRawWrites() - you can move the code that performs setReadyToFlush(), 
elos.flushAndSync() and the length check to a method, to avoid repeating the 
same code.
#* You do not need the try/finally; the @Before method deletes the file.



> Improve FSEditLog pre-allocation
> 
>
> Key: HDFS-3510
> URL: https://issues.apache.org/jira/browse/HDFS-3510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 1.0.0, 2.0.1-alpha
>
> Attachments: HDFS-3510-b1.001.patch, HDFS-3510-b1.002.patch, 
> HDFS-3510.001.patch, HDFS-3510.003.patch, HDFS-3510.004.patch, 
> HDFS-3510.004.patch, HDFS-3510.006.patch, HDFS-3510.007.patch, 
> HDFS-3510.008.patch, HDFS-3510.009.patch, HDFS-3510.010.patch
>
>
> It is good to avoid running out of space in the middle of writing a batch of 
> edits, because when it happens, we often get partial edits at the end of the 
> log.
> Edit log preallocation can solve this problem (see HADOOP-2330 for a full 
> description of edit log preallocation).
> The current pre-allocation code was introduced for performance reasons, not 
> for preventing partial edits.  As a consequence, we sometimes do a write 
> without using pre-allocation.  We should change the pre-allocation code so 
> that it always preallocates at least enough space before writing out the 
> edits.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-06-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401756#comment-13401756
 ] 

Allen Wittenauer commented on HDFS-2617:


Given that 2.x is a major release, it seems a reasonable time to break HFTP 
over KSSL, especially given that one has to severely cripple their security in 
order to make secure Hadoop work on recent Kerberos implementations.  

It also seems reasonable to explain to users as part of their transition to 2.x 
from prior releases that this functionality is going away.  This primarily is 
going to sting the early adopters, an audience who has essentially volunteered 
to be our lab rats.  But for the folks who favor stability, now is the time 
to get the word out to start switching to a 1.x branch with a working WebHDFS.  
By the time 2.0 is stable and/or ready for those people to deploy, they should 
be in relatively good shape.  

Something else to consider:  the impacted audience is likely low, as I suspect 
most people probably aren't running a 1.x release yet and/or don't have 
security turned on.  (I'd *love* to see some stats though.  I really hope I'm 
wrong.  
However knowing that it took us several months to transition from 0.20.2 to 
secure 1.x... and part of that time is directly correlated to the lack of the 
code in this patch... I have a feeling I'm correct.)

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3481) Refactor HttpFS handling of JAX-RS query string parameters

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401757#comment-13401757
 ] 

Eli Collins commented on HDFS-3481:
---

Yea, I don't think there's a workaround, since Java doesn't allow generic array 
creation. In this case we know the type statically, so you could put the puts 
in a static method with @SuppressWarnings({"unchecked"}), but IMO that's harder 
to read, so the current code in your patch is better.  +1 to the latest patch.
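
For reference, the @SuppressWarnings alternative would look something like 
this (names are made up for illustration):
{code}
// Confines the unchecked cast to one helper; call sites build the "array"
// via varargs without triggering the generic-array-creation error.
@SuppressWarnings("unchecked")
private static Class<? extends Param<?>>[] params(Class<?>... classes) {
  return (Class<? extends Param<?>>[]) classes;
}
{code}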

> Refactor HttpFS handling of JAX-RS query string parameters
> --
>
> Key: HDFS-3481
> URL: https://issues.apache.org/jira/browse/HDFS-3481
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.1-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3481.patch, HDFS-3481.patch, HDFS-3481.patch
>
>
> Explicit parameters in the HttpFSServer became quite messy as they are the 
> union of all possible parameters for all operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-06-26 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401748#comment-13401748
 ] 

Kihwal Lee commented on HDFS-2617:
--

bq. unless there is a release that supports both

I meant supporting both SPNEGO and krb5ssl on Hftp.  If we don't have this, we 
can't try 2.0 until we deprecate Hftp and have all users transition to webhdfs 
on 1.x. It's doable but takes time. If Hftp in 2.0 was backward compatible, we 
would be able to have people move to webhdfs and also try 2.0 at the same time.

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-06-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401731#comment-13401731
 ] 

Allen Wittenauer commented on HDFS-2617:


1.0.0/1.0.1/1.0.2/1.0.3 supports WebHDFS and HFTP on secure grids.

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3491) HttpFs does not set permissions correctly

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401725#comment-13401725
 ] 

Eli Collins commented on HDFS-3491:
---

So there's no test that covers that the REST API actually does the intended 
operation? Thought there was a test for that.

> HttpFs does not set permissions correctly
> -
>
> Key: HDFS-3491
> URL: https://issues.apache.org/jira/browse/HDFS-3491
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Romain Rigaux
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-3491.patch, HDFS-3491.patch
>
>
> HttpFs seems to have these problems:
> # can't set permissions to 777 at file creation or 1777 with setpermission
> # does not accept 01777 permissions (which is valid in WebHdfs)
> WebHdfs
> curl -X PUT 
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?permission=1777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> curl  
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1338581075040,"owner":"hue","pathSuffix":"","permission":"1777","replication":0,"type":"DIRECTORY"}}
> curl -X PUT 
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?permission=01777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> HttpFs
> curl -X PUT 
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?permission=1777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> curl  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hue","group":"supergroup","permission":"755","accessTime":0,"modificationTime":1338580912205,"blockSize":0,"replication":0}}
> curl -X PUT  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=SETPERMISSION&PERMISSION=1777&user.name=hue&doas=hue";
> curl  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hue","group":"supergroup","permission":"777","accessTime":0,"modificationTime":1338581075040,"blockSize":0,"replication":0}}
> curl -X PUT 
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?permission=01777&op=MKDIRS&user.name=hue&doas=hue";
> {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
> [permission], invalid value [01777], value must be 
> [default|[0-1]?[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3559) DFSTestUtil: use Builder class to construct DFSTestUtil instances

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401724#comment-13401724
 ] 

Hadoop QA commented on HDFS-3559:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533535/HDFS-3559.003.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 9 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2703//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2703//console

This message is automatically generated.

> DFSTestUtil: use Builder class to construct DFSTestUtil instances
> -
>
> Key: HDFS-3559
> URL: https://issues.apache.org/jira/browse/HDFS-3559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3559.001.patch, HDFS-3559.002.patch, 
> HDFS-3559.003.patch
>
>
> The number of parameters in DFSTestUtil's constructor has grown over time.  
> It would be nice to have a Builder class similar to MiniDFSClusterBuilder, 
> which could construct an instance of DFSTestUtil.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3383) libhdfs does not build on ARM because jni_md.h is not found

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401713#comment-13401713
 ] 

Eli Collins commented on HDFS-3383:
---

Hey Trevor,

Is this still an issue now that we've converted over to cmake?

Thanks,
Eli

> libhdfs does not build on ARM because jni_md.h is not found
> ---
>
> Key: HDFS-3383
> URL: https://issues.apache.org/jira/browse/HDFS-3383
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 0.23.1
> Environment: Linux 3.2.0-1412-omap4 #16-Ubuntu SMP PREEMPT Tue Apr 17 
> 19:38:42 UTC 2012 armv7l armv7l armv7l GNU/Linux
> java version "1.7.0_04-ea"
> Java(TM) SE Runtime Environment for Embedded (build 1.7.0_04-ea-b20, headless)
> Java HotSpot(TM) Embedded Server VM (build 23.0-b21, mixed mode, experimental)
>Reporter: Trevor Robinson
> Attachments: HDFS-3383.patch
>
>
> The wrong include directory is used for jni_md.h:
> [INFO] --- make-maven-plugin:1.0-beta-1:make-install (compile) @ hadoop-hdfs 
> ---
> [INFO] /bin/bash ./libtool --tag=CC   --mode=compile gcc 
> -DPACKAGE_NAME=\"libhdfs\" -DPACKAGE_TARNAME=\"libhdfs\" 
> -DPACKAGE_VERSION=\"0.1.0\" -DPACKAGE_STRING=\"libhdfs\ 0.1.0\" 
> -DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE_URL=\"\" 
> -DPACKAGE=\"libhdfs\" -DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 
> -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 
> -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 
> -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_STRDUP=1 
> -DHAVE_STRERROR=1 -DHAVE_STRTOUL=1 -DHAVE_FCNTL_H=1 -DHAVE__BOOL=1 
> -DHAVE_STDBOOL_H=1 -I. -g -O2 -DOS_LINUX -DDSO_DLFCN -DCPU=\"arm\" 
> -I/usr/lib/jvm/ejdk1.7.0_04/include -I/usr/lib/jvm/ejdk1.7.0_04/include/arm 
> -Wall -Wstrict-prototypes -MT hdfs.lo -MD -MP -MF .deps/hdfs.Tpo -c -o 
> hdfs.lo hdfs.c
> [INFO] libtool: compile:  gcc -DPACKAGE_NAME=\"libhdfs\" 
> -DPACKAGE_TARNAME=\"libhdfs\" -DPACKAGE_VERSION=\"0.1.0\" 
> "-DPACKAGE_STRING=\"libhdfs 0.1.0\"" 
> -DPACKAGE_BUGREPORT=\"omal...@apache.org\" -DPACKAGE_URL=\"\" 
> -DPACKAGE=\"libhdfs\" -DVERSION=\"0.1.0\" -DSTDC_HEADERS=1 
> -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 
> -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 
> -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -DHAVE_STRDUP=1 
> -DHAVE_STRERROR=1 -DHAVE_STRTOUL=1 -DHAVE_FCNTL_H=1 -DHAVE__BOOL=1 
> -DHAVE_STDBOOL_H=1 -I. -g -O2 -DOS_LINUX -DDSO_DLFCN -DCPU=\"arm\" 
> -I/usr/lib/jvm/ejdk1.7.0_04/include -I/usr/lib/jvm/ejdk1.7.0_04/include/arm 
> -Wall -Wstrict-prototypes -MT hdfs.lo -MD -MP -MF .deps/hdfs.Tpo -c hdfs.c  
> -fPIC -DPIC -o .libs/hdfs.o
> [INFO] In file included from hdfs.h:33:0,
> [INFO]  from hdfs.c:19:
> [INFO] /usr/lib/jvm/ejdk1.7.0_04/include/jni.h:45:20: fatal error: jni_md.h: 
> No such file or directory
> [INFO] compilation terminated.
> [INFO] make: *** [hdfs.lo] Error 1
> The problem is caused by 
> hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4 overriding 
> supported_os=arm when host_cpu=arm*; supported_os should remain "linux", 
> since it determines the jni_md.h include path. OpenJDK 6 and 7 (in Ubuntu 
> 12.04, at least) and Oracle EJDK put jni_md.h in include/linux. Not sure 
> if/why this ever worked before.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-06-26 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401703#comment-13401703
 ] 

Kihwal Lee commented on HDFS-2617:
--

WebHDFS works, but we have customers who built their stuff around Hftp, which 
has served as a compatibility layer between different releases. This assumption 
is broken after this jira, and it is a bit difficult to provide a transition 
plan unless there is a release that supports both.

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3570) Balancer shouldn't rely on "DFS Space Used %" as that ignores non-DFS used space

2012-06-26 Thread Harsh J (JIRA)
Harsh J created HDFS-3570:
-

 Summary: Balancer shouldn't rely on "DFS Space Used %" as that 
ignores non-DFS used space
 Key: HDFS-3570
 URL: https://issues.apache.org/jira/browse/HDFS-3570
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Priority: Minor


Report from a user here: 
https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pIhNyDVxdVY/b7ENZmEvBjIJ,
 post archived at http://pastebin.com/eVFkk0A0

This user had a specific DN that had a large non-DFS usage among dfs.data.dirs, 
and very little DFS usage (which is computed against total possible capacity). 

Balancer apparently only looks at the DFS usage, and fails to consider that 
non-DFS usage may also be high on a DN/cluster. Hence, it thinks that if a DN's 
DFS usage report is only 8%, it's got a lot of free space to write more blocks, 
when that isn't true, as shown by the case of this user. It went on scheduling 
writes to the DN to balance it out, but the DN simply can't accept any more 
blocks as a result of its disks' state.

I think it would be better if we _computed_ the actual utilization as 
{{(capacity - actual remaining space)/(capacity)}}, as opposed to the current 
{{(dfs used)/(capacity)}}. Thoughts?
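
To make the difference concrete, with made-up numbers for a single DN:
{code}
long capacity  = 1000;  // GB total across dfs.data.dirs
long dfsUsed   =   80;  // GB of DFS blocks
long remaining =   50;  // GB actually free (870 GB is non-DFS data)

double current  = (double) dfsUsed / capacity;                // 0.08 -> looks nearly empty
double proposed = (double) (capacity - remaining) / capacity; // 0.95 -> nearly full
{code}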

This isn't very critical, however, because it is very rare to see DN space 
being used for non-DFS data, but it does expose a valid bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3569) clean up isInfoEnabled usage re logAuditEvent

2012-06-26 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-3569:


Priority: Trivial  (was: Major)

Decreasing priority to trivial. Thanks, Suresh.

> clean up isInfoEnabled usage re logAuditEvent 
> --
>
> Key: HDFS-3569
> URL: https://issues.apache.org/jira/browse/HDFS-3569
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Trivial
>
> From HDFS-3535 we have
> {quote}
> Normally the checks are used before the method invocation if we're doing 
> expensive things to create the args (eg lots of string concatenation) not to 
> save the cost of the method invocation. Doesn't look like that's the case 
> here (we're not constructing args) so we could just call logAuditEvent 
> directly everywhere.
> There are a bunch of uses of logAuditEvent that do need to check if audit 
> logging is enabled before constructing log messages, etc. I considered 
> refactoring them all and concluded that it was out of scope for this change. 
> I decided not to change the existing idiom (verbose though it is) before 
> refactoring all users of the interface, which should be a separate change.
> {quote}
> There are lots of
> {code}
> if (isFile && auditLog.isInfoEnabled() && isExternalInvocation()) {
>   logAuditEvent(UserGroupInformation.getCurrentUser(),
> }
> {code}
> that can easily be condensed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3551) WebHDFS CREATE does not use client location for redirection

2012-06-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401680#comment-13401680
 ] 

Suresh Srinivas commented on HDFS-3551:
---

Minor comment - the javadoc for TestWebHdfsDataLocality says "Test balander...". 
Also, the NamenodeProtocols import is not needed.

+1 for the change. It would be good to get this into Release 1.1.
 

> WebHDFS CREATE does not use client location for redirection
> ---
>
> Key: HDFS-3551
> URL: https://issues.apache.org/jira/browse/HDFS-3551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3551_20120620.patch, h3551_20120625.patch
>
>
> CREATE currently redirects client to a random datanode but not using the 
> client location information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3568) fuse_dfs: add support for security

2012-06-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401673#comment-13401673
 ] 

Colin Patrick McCabe commented on HDFS-3568:


Thanks for pointing HDFS-2546 out to me, Harsh.  It does look related.  
Hopefully we'll be able to come up with a libhdfs API that will work well for 
both.

> fuse_dfs: add support for security
> --
>
> Key: HDFS-3568
> URL: https://issues.apache.org/jira/browse/HDFS-3568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 1.1.0, 2.0.1-alpha
>
>
> fuse_dfs should have support for Kerberos authentication.  This would allow 
> FUSE to be used in a secure cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3559) DFSTestUtil: use Builder class to construct DFSTestUtil instances

2012-06-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3559:
---

Attachment: HDFS-3559.003.patch

* rebase

> DFSTestUtil: use Builder class to construct DFSTestUtil instances
> -
>
> Key: HDFS-3559
> URL: https://issues.apache.org/jira/browse/HDFS-3559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3559.001.patch, HDFS-3559.002.patch, 
> HDFS-3559.003.patch
>
>
> The number of parameters in DFSTestUtil's constructor has grown over time.  
> It would be nice to have a Builder class similar to MiniDFSClusterBuilder, 
> which could construct an instance of DFSTestUtil.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3510) Improve FSEditLog pre-allocation

2012-06-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401658#comment-13401658
 ] 

Suresh Srinivas commented on HDFS-3510:
---

Could the logic be simplified? You could just write the 
MIN_PREALLOCATION_LENGTH buffer multiple times:
{noformat}
if (need <= 0) {
  return;
}
while (need > 0) {
  fill.position(0);                    // rewind the preallocation buffer
  IOUtils.writeFully(fc, fill, size);  // write one full buffer at offset 'size'
  need -= fillCapacity;
  size += fillCapacity;
}
{noformat}


> Improve FSEditLog pre-allocation
> 
>
> Key: HDFS-3510
> URL: https://issues.apache.org/jira/browse/HDFS-3510
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 1.0.0, 2.0.1-alpha
>
> Attachments: HDFS-3510-b1.001.patch, HDFS-3510-b1.002.patch, 
> HDFS-3510.001.patch, HDFS-3510.003.patch, HDFS-3510.004.patch, 
> HDFS-3510.004.patch, HDFS-3510.006.patch, HDFS-3510.007.patch, 
> HDFS-3510.008.patch, HDFS-3510.009.patch, HDFS-3510.010.patch
>
>
> It is good to avoid running out of space in the middle of writing a batch of 
> edits, because when it happens, we often get partial edits at the end of the 
> log.
> Edit log preallocation can solve this problem (see HADOOP-2330 for a full 
> description of edit log preallocation).
> The current pre-allocation code was introduced for performance reasons, not 
> for preventing partial edits.  As a consequence, we sometimes do a write 
> without using pre-allocation.  We should change the pre-allocation code so 
> that it always preallocates at least enough space before writing out the 
> edits.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3569) clean up isInfoEnabled usage re logAuditEvent

2012-06-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401652#comment-13401652
 ] 

Suresh Srinivas commented on HDFS-3569:
---

Can the priority of this be marked minor/trivial?

> clean up isInfoEnabled usage re logAuditEvent 
> --
>
> Key: HDFS-3569
> URL: https://issues.apache.org/jira/browse/HDFS-3569
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>
> From HDFS-3535 we have
> {quote}
> Normally the checks are used before the method invocation if we're doing 
> expensive things to create the args (eg lots of string concatenation) not to 
> save the cost of the method invocation. Doesn't look like that's the case 
> here (we're not constructing args) so we could just call logAuditEvent 
> directly everywhere.
> There are a bunch of uses of logAuditEvent that do need to check if audit 
> logging is enabled before constructing log messages, etc. I considered 
> refactoring them all and concluded that it was out of scope for this change. 
> I decided not to change the existing idiom (verbose though it is) before 
> refactoring all users of the interface, which should be a separate change.
> {quote}
> There are lots of
> {code}
> if (isFile && auditLog.isInfoEnabled() && isExternalInvocation()) {
>   logAuditEvent(UserGroupInformation.getCurrentUser(),
> }
> {code}
> that can easily be condensed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2546) The C HDFS API should work with secure HDFS

2012-06-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401649#comment-13401649
 ] 

Harsh J commented on HDFS-2546:
---

Hi Ambar,

Would you have a chance to push out a patch (even a WIP one is fine) soon for 
this? Let us know how we can help.

Thanks!

> The C HDFS API should work with secure HDFS
> ---
>
> Key: HDFS-2546
> URL: https://issues.apache.org/jira/browse/HDFS-2546
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: libhdfs
>Affects Versions: 0.24.0
>Reporter: Harsh J
>
> Right now, the libhdfs will not work with Kerberos Hadoop. In case libhdfs is 
> still being supported, it must fully work with Kerberized instances of HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3535) Audit logging should log denied accesses

2012-06-26 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401641#comment-13401641
 ] 

Andy Isaacson commented on HDFS-3535:
-

{quote}
Why? Doesn't seem like the arg evaluation has side effects or is expensive but 
maybe I'm missing something.
{quote}
{code}
FSNamesystem.java-  final HdfsFileStatus stat = dir.getFileInfo(src, 
false);
FSNamesystem.java:  logAuditEvent(UserGroupInformation.getCurrentUser(),
...
FSNamesystem.java-  final HdfsFileStatus stat = dir.getFileInfo(src, false);
FSNamesystem.java:  logAuditEvent(UserGroupInformation.getCurrentUser(),
...
FSNamesystem.java-  StringBuilder cmd = new StringBuilder("rename 
options=");
FSNamesystem.java-  for (Rename option : options) {
FSNamesystem.java-cmd.append(option.value()).append(" ");
FSNamesystem.java-  }
FSNamesystem.java:  logAuditEvent(UserGroupInformation.getCurrentUser(), 
Server.getRemote
{code}

{quote}
Agree this cleanup should be a separate change, file a jira?
{quote}
Sure, filed HDFS-3569.

> Audit logging should log denied accesses
> 
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.1-alpha
>
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HDFS-3569) clean up isInfoEnabled usage re logAuditEvent

2012-06-26 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson reassigned HDFS-3569:
---

Assignee: Andy Isaacson

> clean up isInfoEnabled usage re logAuditEvent 
> --
>
> Key: HDFS-3569
> URL: https://issues.apache.org/jira/browse/HDFS-3569
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>
> From HDFS-3535 we have
> {quote}
> Normally the checks are used before the method invocation if we're doing 
> expensive things to create the args (eg lots of string concatenation) not to 
> save the cost of the method invocation. Doesn't look like that's the case 
> here (we're not constructing args) so we could just call logAuditEvent 
> directly everywhere.
> There are a bunch of uses of logAuditEvent that do need to check if audit 
> logging is enabled before constructing log messages, etc. I considered 
> refactoring them all and concluded that it was out of scope for this change. 
> I decided not to change the existing idiom (verbose though it is) before 
> refactoring all users of the interface, which should be a separate change.
> {quote}
> There are lots of
> {code}
> if (isFile && auditLog.isInfoEnabled() && isExternalInvocation()) {
>   logAuditEvent(UserGroupInformation.getCurrentUser(),
> }
> {code}
> that can easily be condensed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3568) fuse_dfs: add support for security

2012-06-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401639#comment-13401639
 ] 

Todd Lipcon commented on HDFS-3568:
---

I'd think HDFS-2546 is probably a pre-req of this work, but not a duplicate, 
since fuse-dfs is a user of libhdfs.

> fuse_dfs: add support for security
> --
>
> Key: HDFS-3568
> URL: https://issues.apache.org/jira/browse/HDFS-3568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 1.1.0, 2.0.1-alpha
>
>
> fuse_dfs should have support for Kerberos authentication.  This would allow 
> FUSE to be used in a secure cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3569) clean up isInfoEnabled usage re logAuditEvent

2012-06-26 Thread Andy Isaacson (JIRA)
Andy Isaacson created HDFS-3569:
---

 Summary: clean up isInfoEnabled usage re logAuditEvent 
 Key: HDFS-3569
 URL: https://issues.apache.org/jira/browse/HDFS-3569
 Project: Hadoop HDFS
  Issue Type: Task
Affects Versions: 2.0.0-alpha
Reporter: Andy Isaacson


From HDFS-3535 we have
{quote}
Normally the checks are used before the method invocation if we're doing 
expensive things to create the args (eg lots of string concatenation) not to 
save the cost of the method invocation. Doesn't look like that's the case here 
(we're not constructing args) so we could just call logAuditEvent directly 
everywhere.

There are a bunch of uses of logAuditEvent that do need to check if audit 
logging is enabled before constructing log messages, etc. I considered 
refactoring them all and concluded that it was out of scope for this change. I 
decided not to change the existing idiom (verbose though it is) before 
refactoring all users of the interface, which should be a separate change.
{quote}
There are lots of
{code}
if (isFile && auditLog.isInfoEnabled() && isExternalInvocation()) {
  logAuditEvent(UserGroupInformation.getCurrentUser(),
}
{code}
that can easily be condensed.
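
For example, a condensed form might centralize the guard in a small wrapper 
(hypothetical; the eventual refactoring may differ):
{code}
private void logAuditEvent(String cmd, String src, String dst,
    HdfsFileStatus stat) throws IOException {
  if (auditLog.isInfoEnabled() && isExternalInvocation()) {
    logAuditEvent(UserGroupInformation.getCurrentUser(),
        Server.getRemoteIp(), cmd, src, dst, stat);
  }
}

// Call sites then shrink to:
if (isFile) {
  logAuditEvent("open", src, null, stat);
}
{code}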

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3535) Audit logging should log denied accesses

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401635#comment-13401635
 ] 

Hudson commented on HDFS-3535:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2409 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2409/])
HDFS-3535. Audit logging should log denied accesses. Contributed by Andy 
Isaacson (Revision 1354144)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354144
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Audit logging should log denied accesses
> 
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.1-alpha
>
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3559) DFSTestUtil: use Builder class to construct DFSTestUtil instances

2012-06-26 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401605#comment-13401605
 ] 

Aaron T. Myers commented on HDFS-3559:
--

Whoops, looks like this patch no longer compiles since HDFS-3559 went in. Mind 
uploading a new patch?

> DFSTestUtil: use Builder class to construct DFSTestUtil instances
> -
>
> Key: HDFS-3559
> URL: https://issues.apache.org/jira/browse/HDFS-3559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3559.001.patch, HDFS-3559.002.patch
>
>
> The number of parameters in DFSTestUtil's constructor has grown over time.  
> It would be nice to have a Builder class similar to MiniDFSClusterBuilder, 
> which could construct an instance of DFSTestUtil.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3559) DFSTestUtil: use Builder class to construct DFSTestUtil instances

2012-06-26 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401602#comment-13401602
 ] 

Aaron T. Myers commented on HDFS-3559:
--

Thanks for checking on this.

+1, I'm going to commit this momentarily.

> DFSTestUtil: use Builder class to construct DFSTestUtil instances
> -
>
> Key: HDFS-3559
> URL: https://issues.apache.org/jira/browse/HDFS-3559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3559.001.patch, HDFS-3559.002.patch
>
>
> The number of parameters in DFSTestUtil's constructor has grown over time.  
> It would be nice to have a Builder class similar to MiniDFSClusterBuilder, 
> which could construct an instance of DFSTestUtil.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3491) HttpFs does not set permissions correctly

2012-06-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401599#comment-13401599
 ] 

Alejandro Abdelnur commented on HDFS-3491:
--

The bug was not exposed when using a client FileSystem implementation, only 
when using the REST API directly. The client FileSystem did not hit the issue 
because the string conversion of 1777 does not add a leading '0'.
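A minimal illustration of that conversion (sketch):

{code}
// The octal string produced client-side has no leading '0', so it matches
// HttpFS's [0-1]?[0-7][0-7][0-7] pattern, while a literal "01777" sent
// directly over REST does not.
short perm = 01777;                          // sticky bit + rwxrwxrwx
String sent = Integer.toOctalString(perm);   // "1777" -- no leading zero
{code}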

> HttpFs does not set permissions correctly
> -
>
> Key: HDFS-3491
> URL: https://issues.apache.org/jira/browse/HDFS-3491
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Romain Rigaux
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-3491.patch, HDFS-3491.patch
>
>
> HttpFs seems to have these problems:
> # can't set permissions to 777 at file creation or 1777 with setpermission
> # does not accept 01777 permissions (which is valid in WebHdfs)
> WebHdfs
> curl -X PUT 
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?permission=1777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> curl  
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1338581075040,"owner":"hue","pathSuffix":"","permission":"1777","replication":0,"type":"DIRECTORY"}}
> curl -X PUT 
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?permission=01777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> HttpFs
> curl -X PUT 
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?permission=1777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> curl  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hue","group":"supergroup","permission":"755","accessTime":0,"modificationTime":1338580912205,"blockSize":0,"replication":0}}
> curl -X PUT  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=SETPERMISSION&PERMISSION=1777&user.name=hue&doas=hue";
> curl  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hue","group":"supergroup","permission":"777","accessTime":0,"modificationTime":1338581075040,"blockSize":0,"replication":0}}
> curl -X PUT 
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?permission=01777&op=MKDIRS&user.name=hue&doas=hue";
> {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
> [permission], invalid value [01777], value must be 
> [default|[0-1]?[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3559) DFSTestUtil: use Builder class to construct DFSTestUtil instances

2012-06-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401600#comment-13401600
 ] 

Colin Patrick McCabe commented on HDFS-3559:


The javadoc warning comes from here:

{code}
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java:651:
 warning - @param argument "jobid" is not a parameter name.
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java:669:
 warning - @param argument "jobid" is not a parameter name.
{code}

which in turn seems to come from change 1353757, so it's not from this patch.
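For context, a minimal reproduction of that class of warning (hypothetical 
method, not the actual JobClient code):

{code}
/**
 * @param jobid identifier of the job    (flagged: the parameter is "jobId")
 */
public void printJobStatus(String jobId) {
  System.out.println(jobId);
}
{code}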

> DFSTestUtil: use Builder class to construct DFSTestUtil instances
> -
>
> Key: HDFS-3559
> URL: https://issues.apache.org/jira/browse/HDFS-3559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3559.001.patch, HDFS-3559.002.patch
>
>
> The number of parameters in DFSTestUtil's constructor has grown over time.  
> It would be nice to have a Builder class similar to MiniDFSClusterBuilder, 
> which could construct an instance of DFSTestUtil.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3564) Make the replication policy pluggable to allow custom replication policies

2012-06-26 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401595#comment-13401595
 ] 

Harsh J commented on HDFS-3564:
---

We've already made replication policies pluggable via an experimental API; see 
https://issues.apache.org/jira/browse/HDFS-385. This is already available in 
the 2.0.x, 0.23.x and 0.22.x releases. If that suffices, please close this out 
as a dupe?
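For reference, a minimal sketch of selecting a custom policy (assuming the 
dfs.block.replicator.classname key from HDFS-385; the policy class here is 
hypothetical):

{code}
// Select a custom placement policy by class name (sketch; class is made up).
Configuration conf = new Configuration();
conf.set("dfs.block.replicator.classname",
    "com.example.CrossDatacenterPlacementPolicy");
{code}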

> Make the replication policy pluggable to allow custom replication policies
> --
>
> Key: HDFS-3564
> URL: https://issues.apache.org/jira/browse/HDFS-3564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Sumadhur Reddy Bolli
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> ReplicationTargetChooser currently determines the placement of replicas in 
> Hadoop. Making the replication policy pluggable would help in having custom 
> replication policies that suit the environment. 
> Eg1: Enabling placing replicas across different datacenters (not just racks)
> Eg2: Enabling placing replicas across multiple (more than 2) racks
> Eg3: Cloud environments like Azure have logical concepts like fault and 
> upgrade domains. Each fault domain spans multiple upgrade domains and each 
> upgrade domain spans multiple fault domains. Machines are typically spread 
> evenly across both fault and upgrade domains. Fault domain failures are 
> typically catastrophic/unplanned failures, and the possibility of data loss 
> is high. An upgrade domain can be taken down by Azure for maintenance 
> periodically. Each time an upgrade domain is taken down, a small percentage 
> of machines in it (typically 1-2%) are replaced due to disk failures, thus 
> losing data. Assuming the default replication factor of 3, any 3 data nodes 
> going down at the same time would mean potential data loss. So it is 
> important to have a policy that spreads replicas across both fault and 
> upgrade domains to ensure practically no data loss. The problem here is 
> two-dimensional, and the default policy in Hadoop is one-dimensional. Custom 
> policies to address issues like these can be written if we make the policy 
> pluggable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3481) Refactor HttpFS handling of JAX-RS query string parameters

2012-06-26 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401594#comment-13401594
 ] 

Alejandro Abdelnur commented on HDFS-3481:
--

On the warning:

Offending source:

{code}
...
  private static final Map<Enum, Class<Param<?>>[]> PARAMS_DEF =
    new HashMap<Enum, Class<Param<?>>[]>();

  static {
    PARAMS_DEF.put(Operation.OPEN,
      new Class[]{DoAsParam.class, OffsetParam.class, LenParam.class});
...
{code}

{code}
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java:[48,6]
 [unchecked] unchecked conversion
found   : java.lang.Class[]
required: java.lang.Class<org.apache.hadoop.lib.wsrs.Param<?>>[]
{code}

Regarding the duplication of the Parameter class, I think the reason for the 
duplication is that WebHDFS is tightly coupled with HDFS code (within the same 
Maven module) while HttpFS is decoupled and could (in theory) be used without 
HDFS itself on the classpath. As part of HDFS-2645 all this duplicated code 
would go away.


> Refactor HttpFS handling of JAX-RS query string parameters
> --
>
> Key: HDFS-3481
> URL: https://issues.apache.org/jira/browse/HDFS-3481
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.1-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3481.patch, HDFS-3481.patch, HDFS-3481.patch
>
>
> Explicit parameters in the HttpFSServer became quite messy as they are the 
> union of all possible parameters for all operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3535) Audit logging should log denied accesses

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401596#comment-13401596
 ] 

Hudson commented on HDFS-3535:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2459 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2459/])
HDFS-3535. Audit logging should log denied accesses. Contributed by Andy 
Isaacson (Revision 1354144)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354144
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Audit logging should log denied accesses
> 
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.1-alpha
>
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3537) put libhdfs source files in a directory named libhdfs

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401588#comment-13401588
 ] 

Eli Collins commented on HDFS-3537:
---

Hey Nicholas,

The idea is that fuse-dfs is moving to src/main/native so it's part of the 
regular native build structure (though still only built when a flag is passed), 
and we don't want to mix libhdfs and fuse-dfs (and any other native code we 
add) in the same directory. We used to have a "libhdfs" dir where all the 
libhdfs code lived; this is just reintroducing that. That is, we should do this 
even if we don't plan to add more native code, as it makes sense for libhdfs 
code to live in a directory called libhdfs.

Thanks,
Eli

> put libhdfs source files in a directory named libhdfs
> -
>
> Key: HDFS-3537
> URL: https://issues.apache.org/jira/browse/HDFS-3537
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3537.001.patch
>
>
> Move libhdfs source files from main/native to main/native/libhdfs.  Rename 
> hdfs_read to libhdfs_test_read; rename hdfs_write to libhdfs_test_write.
> The rationale is that we'd like to add some other stuff under main/native 
> (like fuse_dfs) and it's nice to have separate things in separate directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3535) Audit logging should log denied accesses

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401587#comment-13401587
 ] 

Hudson commented on HDFS-3535:
--

Integrated in Hadoop-Common-trunk-Commit #2390 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2390/])
HDFS-3535. Audit logging should log denied accesses. Contributed by Andy 
Isaacson (Revision 1354144)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354144
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


> Audit logging should log denied accesses
> 
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.1-alpha
>
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-06-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401581#comment-13401581
 ] 

Allen Wittenauer commented on HDFS-2617:


bq. Why doesn't WebHDFS work? It is supported in branch-1.

*echo* *echo* *echo*

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3535) Audit logging should log denied accesses

2012-06-26 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3535:
--

Summary: Audit logging should log denied accesses  (was: audit logging 
should log denied accesses as well as permitted ones)

> Audit logging should log denied accesses
> 
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.1-alpha
>
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3535) audit logging should log denied accesses as well as permitted ones

2012-06-26 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3535:
--

      Resolution: Fixed
   Fix Version/s: 2.0.1-alpha
Target Version/s:   (was: 2.0.1-alpha)
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Andy!

> audit logging should log denied accesses as well as permitted ones
> --
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.0.1-alpha
>
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3535) audit logging should log denied accesses as well as permitted ones

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401575#comment-13401575
 ] 

Eli Collins commented on HDFS-3535:
---

+1 latest patch looks good

bq. There are a bunch of uses of logAuditEvent that do need to check if audit 
logging is enabled before constructing log messages. 

Why? It doesn't seem like the arg evaluation has side effects or is expensive, 
but maybe I'm missing something. Agreed this cleanup should be a separate 
change; file a jira?
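For reference, the guard in question would look roughly like this (a sketch; 
whether it's worth it is exactly the open question):

{code}
// Skip argument construction (e.g. the Arrays.toString(srcs) call) when the
// audit logger is disabled.
if (auditLog.isInfoEnabled()) {
  logAuditEvent(UserGroupInformation.getLoginUser(),
    Server.getRemoteIp(),
    "concat", Arrays.toString(srcs), target, resultingStat);
}
{code}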

> audit logging should log denied accesses as well as permitted ones
> --
>
> Key: HDFS-3535
> URL: https://issues.apache.org/jira/browse/HDFS-3535
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hdfs-3535-1.txt, hdfs-3535-2.txt, hdfs-3535.txt
>
>
> FSNamesystem.java logs an audit log entry when a user successfully accesses 
> the filesystem:
> {code}
>   logAuditEvent(UserGroupInformation.getLoginUser(),
> Server.getRemoteIp(),
> "concat", Arrays.toString(srcs), target, resultingStat);
> {code}
> but there is no similar log when a user attempts to access the filesystem and 
> is denied due to permissions.  Competing systems do provide such logging of 
> denied access attempts; we should too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3491) HttpFs does not set permissions correctly

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401568#comment-13401568
 ] 

Eli Collins commented on HDFS-3491:
---

The TestParam addition is good, but what test covers that the values passed to 
the "permission" parameter are reflected when accessing the same file via 
FileSystem?

> HttpFs does not set permissions correctly
> -
>
> Key: HDFS-3491
> URL: https://issues.apache.org/jira/browse/HDFS-3491
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Romain Rigaux
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-3491.patch, HDFS-3491.patch
>
>
> HttpFs seems to have these problems:
> # can't set permissions to 777 at file creation or 1777 with setpermission
> # does not accept 01777 permissions (which is valid in WebHdfs)
> WebHdfs
> curl -X PUT 
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?permission=1777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> curl  
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1338581075040,"owner":"hue","pathSuffix":"","permission":"1777","replication":0,"type":"DIRECTORY"}}
> curl -X PUT 
> "http://localhost:50070/webhdfs/v1/tmp/test-perm-webhdfs?permission=01777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> HttpFs
> curl -X PUT 
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?permission=1777&op=MKDIRS&user.name=hue&doas=hue";
> {"boolean":true}
> curl  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hue","group":"supergroup","permission":"755","accessTime":0,"modificationTime":1338580912205,"blockSize":0,"replication":0}}
> curl -X PUT  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=SETPERMISSION&PERMISSION=1777&user.name=hue&doas=hue";
> curl  
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?op=GETFILESTATUS&user.name=hue&doas=hue";
> {"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hue","group":"supergroup","permission":"777","accessTime":0,"modificationTime":1338581075040,"blockSize":0,"replication":0}}
> curl -X PUT 
> "http://localhost:14000/webhdfs/v1/tmp/test-perm-httpfs?permission=01777&op=MKDIRS&user.name=hue&doas=hue";
> {"RemoteException":{"message":"java.lang.IllegalArgumentException: Parameter 
> [permission], invalid value [01777], value must be 
> [default|[0-1]?[0-7][0-7][0-7]]","exception":"QueryParamException","javaClassName":"com.sun.jersey.api.ParamException$QueryParamException"}}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3568) fuse_dfs: add support for security

2012-06-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401567#comment-13401567
 ] 

Colin Patrick McCabe commented on HDFS-3568:


If it's run as root, fuse_dfs can get access to the Kerberos ticket cache file 
of the user performing a FUSE operation. FUSE can then create a FileSystem 
instance with this Kerberos ticket cache.

In the future, it would also be good to use privilege separation to contain the 
power of a fuse_dfs instance running as root.
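A rough sketch of that flow on the Java side (assuming the conventional 
/tmp/krb5cc_<uid> ticket cache location; the uid would come from the FUSE 
context on the C side):

{code}
// Build a UGI from the calling user's ticket cache, then create a
// FileSystem bound to that user's Kerberos credentials.
UserGroupInformation ugi =
    UserGroupInformation.getUGIFromTicketCache("/tmp/krb5cc_" + uid, null);
FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
  public FileSystem run() throws IOException {
    return FileSystem.get(conf);
  }
});
{code}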

> fuse_dfs: add support for security
> --
>
> Key: HDFS-3568
> URL: https://issues.apache.org/jira/browse/HDFS-3568
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 1.1.0, 2.0.1-alpha
>
>
> fuse_dfs should have support for Kerberos authentication.  This would allow 
> FUSE to be used in a secure cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3481) Refactor HttpFS handling of JAX-RS query string parameters

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401565#comment-13401565
 ] 

Eli Collins commented on HDFS-3481:
---

What's the warning?

Makes sense wrt the Parameter class. I don't follow the logic wrt having two 
Param classes, ie we can start sharing code before we're 100% functionally 
equivalent. Not a blocker for this issue since we already have a duplicated 
Param class.

> Refactor HttpFS handling of JAX-RS query string parameters
> --
>
> Key: HDFS-3481
> URL: https://issues.apache.org/jira/browse/HDFS-3481
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.1-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3481.patch, HDFS-3481.patch, HDFS-3481.patch
>
>
> Explicit parameters in the HttpFSServer became quite messy as they are the 
> union of all possible parameters for all operations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3568) fuse_dfs: add support for security

2012-06-26 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-3568:
--

 Summary: fuse_dfs: add support for security
 Key: HDFS-3568
 URL: https://issues.apache.org/jira/browse/HDFS-3568
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha, 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.1.0, 2.0.1-alpha


fuse_dfs should have support for Kerberos authentication.  This would allow 
FUSE to be used in a secure cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2617) Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution

2012-06-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401558#comment-13401558
 ] 

Eli Collins commented on HDFS-2617:
---

Hey Owen,

Looks like the patch removes KSSL support entirely, which will break existing 
1.x users. How about adding a config option so the new SPNEGO-based solution 
can be enabled via a config?
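Something along these lines, perhaps (the config key is made up for 
illustration):

{code}
// Hypothetical compatibility switch: default to the existing KSSL path.
boolean useSpnego = conf.getBoolean("dfs.namenode.http.use.spnego", false);
if (useSpnego) {
  // new SPNEGO-authenticated image transfer / fsck
} else {
  // existing Kerberized SSL (KSSL) path, preserved for 1.x users
}
{code}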

Thanks,
Eli

> Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution
> --
>
> Key: HDFS-2617
> URL: https://issues.apache.org/jira/browse/HDFS-2617
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-2617-a.patch, HDFS-2617-b.patch, 
> HDFS-2617-config.patch, HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, 
> HDFS-2617-trunk.patch, HDFS-2617-trunk.patch, hdfs-2617-1.1.patch
>
>
> The current approach to secure and authenticate nn web services is based on 
> Kerberized SSL and was developed when a SPNEGO solution wasn't available. Now 
> that we have one, we can get rid of the non-standard KSSL and use SPNEGO 
> throughout.  This will simplify setup and configuration.  Also, Kerberized 
> SSL is a non-standard approach with its own quirks and dark corners 
> (HDFS-2386).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3563) Fix findbug warnings in raid

2012-06-26 Thread Weiyan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401536#comment-13401536
 ] 

Weiyan Wang commented on HDFS-3563:
---

Sure.

> Fix findbug warnings in raid
> 
>
> Key: HDFS-3563
> URL: https://issues.apache.org/jira/browse/HDFS-3563
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: contrib/raid
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Weiyan Wang
>
> MAPREDUCE-3868 re-enabled raid but introduced 31 new findbugs warnings.  
> Those warnings should be fixed or appropriate items placed in an exclude file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HDFS-3567) Provide a way to enforce clearing of trash data immediately

2012-06-26 Thread Harsh J (JIRA)
Harsh J created HDFS-3567:
-

 Summary: Provide a way to enforce clearing of trash data 
immediately
 Key: HDFS-3567
 URL: https://issues.apache.org/jira/browse/HDFS-3567
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 3.0.0
Reporter: Harsh J
Priority: Minor


As discussed at http://search-hadoop.com/m/r1lMa13eN7O, it would be good to 
have a dfsadmin sub-command (or similar) that admins can use to force a trash 
emptier run on the NameNode, instead of waiting for the trash clearance 
interval to pass. This can come in handy when attempting to quickly delete 
data in a cluster that is filling up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3516) Check content-type in WebHdfsFileSystem

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401414#comment-13401414
 ] 

Hudson commented on HDFS-3516:
--

Integrated in Hadoop-Mapreduce-trunk #1121 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1121/])
HDFS-3516. Check content-type in WebHdfsFileSystem. (Revision 1353800)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353800
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java


> Check content-type in WebHdfsFileSystem
> ---
>
> Key: HDFS-3516
> URL: https://issues.apache.org/jira/browse/HDFS-3516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.1.0, 2.0.1-alpha
>
> Attachments: h3516_20120607.patch, h3516_20120608.patch, 
> h3516_20120609.patch, h3516_20120609_b-1.patch
>
>
> WebHdfsFileSystem currently tries to parse the response as json.  It may be a 
> good idea to check the content-type before parsing it.
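> A minimal sketch of such a check (placement and failure handling assumed):
> {code}
> // Verify the HTTP response advertises JSON before handing it to the parser.
> final String contentType = conn.getContentType(); // conn: HttpURLConnection
> if (contentType == null
>     || !contentType.startsWith(MediaType.APPLICATION_JSON)) {
>   throw new IOException("Expected " + MediaType.APPLICATION_JSON
>       + " but received \"" + contentType + "\"");
> }
> {code}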

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3549) dist tar build fails in hadoop-hdfs-raid project

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401416#comment-13401416
 ] 

Hudson commented on HDFS-3549:
--

Integrated in Hadoop-Mapreduce-trunk #1121 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1121/])
HDFS-3549. Fix dist tar build fails in hadoop-hdfs-raid project. (Jason 
Lowe via daryn) (Revision 1353695)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353695
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> dist tar build fails in hadoop-hdfs-raid project
> 
>
> Key: HDFS-3549
> URL: https://issues.apache.org/jira/browse/HDFS-3549
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3549.patch, HDFS-3549.patch, HDFS-3549.patch, 
> HDFS-3549.patch
>
>
> Trying to build the distribution tarball in a clean tree via {{mvn install 
> -Pdist -Dtar -DskipTests -Dmaven.javadoc.skip}} fails with this error:
> {noformat}
> main:
>  [exec] tar: hadoop-hdfs-raid-3.0.0-SNAPSHOT: Cannot stat: No such file 
> or directory
>  [exec] tar: Exiting with failure status due to previous errors
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3498) Make Replica Removal Policy pluggable and ReplicaPlacementPolicyDefault extensible for reusing code in subclass

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401415#comment-13401415
 ] 

Hudson commented on HDFS-3498:
--

Integrated in Hadoop-Mapreduce-trunk #1121 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1121/])
HDFS-3498. Support replica removal in BlockPlacementPolicy and make 
BlockPlacementPolicyDefault extensible for reusing code in subclasses.  
Contributed by Junping Du (Revision 1353807)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353807
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


> Make Replica Removal Policy pluggable and ReplicaPlacementPolicyDefault 
> extensible for reusing code in subclass
> ---
>
> Key: HDFS-3498
> URL: https://issues.apache.org/jira/browse/HDFS-3498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-3498-v2.patch, HDFS-3498-v3.patch, 
> HDFS-3498-v4.patch, HDFS-3498-v5.patch, HDFS-3498.patch, 
> Hadoop-8471-BlockPlacementDefault-extensible.patch
>
>
> ReplicaPlacementPolicy is already a pluggable component in Hadoop. However, 
> the Replica Removal Policy is still nested in BlockManager and needs to be 
> separated out into the ReplicaPlacementPolicy so that it can be overridden 
> later. It also looks like the hadoop unit tests lack coverage of the replica 
> removal policy, so we add it here.
> On the other hand, as an implementation of ReplicaPlacementPolicy, 
> ReplicaPlacementPolicyDefault is still largely generic across other topology 
> cases, such as virtualization, and we want the code in 
> ReplicaPlacementPolicyDefault to be reusable as much as possible, so a few 
> of its methods were changed from private to protected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3550) raid added javadoc warnings

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401413#comment-13401413
 ] 

Hudson commented on HDFS-3550:
--

Integrated in Hadoop-Mapreduce-trunk #1121 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1121/])
HDFS-3550. Fix raid javadoc warnings. (Jason Lowe via daryn) (Revision 
1353592)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353592
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Decoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Encoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> raid added javadoc warnings
> ---
>
> Key: HDFS-3550
> URL: https://issues.apache.org/jira/browse/HDFS-3550
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Thomas Graves
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3550.patch
>
>
> hdfs raid which I believe was introduced by MAPREDUCE-3868 has added the 
> following javadoc warnings and now all the builds complain about them:
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Decoder.java:180:
>  warning - @param argument "parityFile" is not a parameter name.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Encoder.java:340:
>  warning - @param argument "srcFile" is not a parameter name.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java:24:
>  warning - Tag @link: reference not found: CronNode
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java:24:
>  warning - Tag @link: reference not found: CronNode
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java:24:
>  warning - Tag @link: reference not found: CronNode
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HDFS-3553) Hftp proxy tokens are broken

2012-06-26 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3553:
--

Status: Open  (was: Patch Available)

Canceling patch since the problem runs deeper. Hftp isn't correctly locating a 
TGT within a doAs.
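The failing shape, roughly (the UGI calls are real API; the scenario is 
illustrative):

{code}
// Inside doAs, the current UGI is the proxy user, which carries no Kerberos
// TGT -- the TGT belongs to the real (login) user.
UserGroupInformation proxy = UserGroupInformation.createProxyUser(
    "jobUser", UserGroupInformation.getLoginUser());
proxy.doAs(new PrivilegedExceptionAction<Void>() {
  public Void run() throws Exception {
    UserGroupInformation.getCurrentUser();  // proxy user: no TGT found here
    return null;
  }
});
{code}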

> Hftp proxy tokens are broken
> 
>
> Key: HDFS-3553
> URL: https://issues.apache.org/jira/browse/HDFS-3553
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 1.0.2, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HDFS-3553.branch-1.0.patch
>
>
> Proxy tokens are broken for hftp. The impact is that systems using proxy 
> tokens, such as oozie jobs, cannot use hftp.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3550) raid added javadoc warnings

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401313#comment-13401313
 ] 

Hudson commented on HDFS-3550:
--

Integrated in Hadoop-Hdfs-trunk #1088 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1088/])
HDFS-3550. Fix raid javadoc warnings. (Jason Lowe via daryn) (Revision 
1353592)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353592
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Decoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Encoder.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> raid added javadoc warnings
> ---
>
> Key: HDFS-3550
> URL: https://issues.apache.org/jira/browse/HDFS-3550
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Thomas Graves
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3550.patch
>
>
> hdfs raid which I believe was introduced by MAPREDUCE-3868 has added the 
> following javadoc warnings and now all the builds complain about them:
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Decoder.java:180:
>  warning - @param argument "parityFile" is not a parameter name.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/Encoder.java:340:
>  warning - @param argument "srcFile" is not a parameter name.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java:24:
>  warning - Tag @link: reference not found: CronNode
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java:24:
>  warning - Tag @link: reference not found: CronNode
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:58:
>  warning - @inheritDocs is an unknown tag.
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/RaidConfigurationException.java:24:
>  warning - Tag @link: reference not found: CronNode
> [WARNING] 
> /home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/src/main/java/org/apache/hadoop/raid/DistRaidNode.java:71:
>  warning - @inheritDocs is an unknown tag.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HDFS-3498) Make Replica Removal Policy pluggable and ReplicaPlacementPolicyDefault extensible for reusing code in subclass

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401315#comment-13401315
 ] 

Hudson commented on HDFS-3498:
--

Integrated in Hadoop-Hdfs-trunk #1088 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1088/])
HDFS-3498. Support replica removal in BlockPlacementPolicy and make 
BlockPlacementPolicyDefault extensible for reusing code in subclasses.  
Contributed by Junping Du (Revision 1353807)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353807
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


> Make Replica Removal Policy pluggable and ReplicaPlacementPolicyDefault 
> extensible for reusing code in subclass
> ---
>
> Key: HDFS-3498
> URL: https://issues.apache.org/jira/browse/HDFS-3498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: HDFS-3498-v2.patch, HDFS-3498-v3.patch, 
> HDFS-3498-v4.patch, HDFS-3498-v5.patch, HDFS-3498.patch, 
> Hadoop-8471-BlockPlacementDefault-extensible.patch
>
>
> ReplicaPlacementPolicy is already a pluggable component in Hadoop. However, 
> the Replica Removal Policy is still nested in BlockManager and needs to be 
> separated out into the ReplicaPlacementPolicy so that it can be overridden 
> later. It also looks like the hadoop unit tests lack coverage of the replica 
> removal policy, so we add it here.
> On the other hand, as an implementation of ReplicaPlacementPolicy, 
> ReplicaPlacementPolicyDefault is still largely generic across other topology 
> cases, such as virtualization, and we want the code in 
> ReplicaPlacementPolicyDefault to be reusable as much as possible, so a few 
> of its methods were changed from private to protected.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3549) dist tar build fails in hadoop-hdfs-raid project

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401316#comment-13401316
 ] 

Hudson commented on HDFS-3549:
--

Integrated in Hadoop-Hdfs-trunk #1088 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1088/])
HDFS-3549. Fix dist tar build fails in hadoop-hdfs-raid project. (Jason 
Lowe via daryn) (Revision 1353695)

 Result = FAILURE
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353695
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-raid/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> dist tar build fails in hadoop-hdfs-raid project
> 
>
> Key: HDFS-3549
> URL: https://issues.apache.org/jira/browse/HDFS-3549
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HDFS-3549.patch, HDFS-3549.patch, HDFS-3549.patch, 
> HDFS-3549.patch
>
>
> Trying to build the distribution tarball in a clean tree via {{mvn install 
> -Pdist -Dtar -DskipTests -Dmaven.javadoc.skip}} fails with this error:
> {noformat}
> main:
>  [exec] tar: hadoop-hdfs-raid-3.0.0-SNAPSHOT: Cannot stat: No such file 
> or directory
>  [exec] tar: Exiting with failure status due to previous errors
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3516) Check content-type in WebHdfsFileSystem

2012-06-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401314#comment-13401314
 ] 

Hudson commented on HDFS-3516:
--

Integrated in Hadoop-Hdfs-trunk #1088 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1088/])
HDFS-3516. Check content-type in WebHdfsFileSystem. (Revision 1353800)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353800
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/WebHdfsTestUtil.java


> Check content-type in WebHdfsFileSystem
> ---
>
> Key: HDFS-3516
> URL: https://issues.apache.org/jira/browse/HDFS-3516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.1.0, 2.0.1-alpha
>
> Attachments: h3516_20120607.patch, h3516_20120608.patch, 
> h3516_20120609.patch, h3516_20120609_b-1.patch
>
>
> WebHdfsFileSystem currently tries to parse the response as json.  It may be a 
> good idea to check the content-type before parsing it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3551) WebHDFS CREATE does not use client location for redirection

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401292#comment-13401292
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3551:
--

The test failure and javadoc warnings are not related to the patch.

> WebHDFS CREATE does not use client location for redirection
> ---
>
> Key: HDFS-3551
> URL: https://issues.apache.org/jira/browse/HDFS-3551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3551_20120620.patch, h3551_20120625.patch
>
>
> CREATE currently redirects the client to a random datanode instead of using 
> the client's location information.
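
The intent of the fix is to make the CREATE redirect location-aware rather
than random. A rough sketch of the selection logic (the names below are
hypothetical, not the attached patch, which works against the NameNode's
real topology data):

{code}
// Hypothetical illustration of location-aware redirect selection for
// WebHDFS CREATE -- not the code in the attached patches.
import java.util.List;

class RedirectChooser {
  /**
   * Prefer a datanode co-located with the client; otherwise fall back
   * to the first live node instead of a purely random pick.
   */
  static String chooseDatanode(String clientHost, List<String> liveNodes) {
    for (String host : liveNodes) {
      if (host.equals(clientHost)) {
        return host;  // a client-local write avoids a network hop
      }
    }
    return liveNodes.get(0);  // real code would sort by rack distance
  }
}
{code}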

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3516) Check content-type in WebHdfsFileSystem

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3516:
-

Fix Version/s: 1.1.0

Committed also to branch-1 and branch-1.1.

> Check content-type in WebHdfsFileSystem
> ---
>
> Key: HDFS-3516
> URL: https://issues.apache.org/jira/browse/HDFS-3516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.1.0, 2.0.1-alpha
>
> Attachments: h3516_20120607.patch, h3516_20120608.patch, 
> h3516_20120609.patch, h3516_20120609_b-1.patch
>
>
> WebHdfsFileSystem currently tries to parse the response as JSON. It may be a 
> good idea to check the content-type before parsing it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3516) Check content-type in WebHdfsFileSystem

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3516:
-

Attachment: h3516_20120609_b-1.patch

h3516_20120609_b-1.patch: for branch-1.

> Check content-type in WebHdfsFileSystem
> ---
>
> Key: HDFS-3516
> URL: https://issues.apache.org/jira/browse/HDFS-3516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 2.0.1-alpha
>
> Attachments: h3516_20120607.patch, h3516_20120608.patch, 
> h3516_20120609.patch, h3516_20120609_b-1.patch
>
>
> WebHdfsFileSystem currently tries to parse the response as JSON. It may be a 
> good idea to check the content-type before parsing it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3551) WebHDFS CREATE does not use client location for redirection

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401280#comment-13401280
 ] 

Hadoop QA commented on HDFS-3551:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533437/h3551_20120625.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2701//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2701//console

This message is automatically generated.

> WebHDFS CREATE does not use client location for redirection
> ---
>
> Key: HDFS-3551
> URL: https://issues.apache.org/jira/browse/HDFS-3551
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3551_20120620.patch, h3551_20120625.patch
>
>
> CREATE currently redirects the client to a random datanode instead of using 
> the client's location information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3504) Configurable retry in DFSClient

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3504:
-

Fix Version/s: 1.1.0

> Configurable retry in DFSClient
> ---
>
> Key: HDFS-3504
> URL: https://issues.apache.org/jira/browse/HDFS-3504
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Siddharth Seth
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 1.1.0, 2.0.1-alpha
>
> Attachments: h3504_20120607.patch, h3504_20120608.patch, 
> h3504_20120611.patch, h3504_20120611_b-1.0.patch
>
>
> When NN maintenance is performed on a large cluster, jobs end up failing. 
> This is particularly bad for long-running jobs. The client retry policy could 
> be made configurable so that jobs don't need to be restarted.
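
The shape of the change is to derive the DFSClient retry policy from
configuration instead of hard-coding it. A sketch using Hadoop's existing
RetryPolicies API; the property keys below are assumed for illustration,
not necessarily the keys the patch introduces:

{code}
// Illustrative only: the configuration keys here are assumed names.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

class ClientRetryPolicyFactory {
  static RetryPolicy fromConf(Configuration conf) {
    if (!conf.getBoolean("dfs.client.retry.policy.enabled", false)) {
      return RetryPolicies.TRY_ONCE_THEN_FAIL;  // legacy fail-fast behavior
    }
    int maxRetries = conf.getInt("dfs.client.retry.max.attempts", 10);
    long sleepMs = conf.getLong("dfs.client.retry.sleep.millis", 1000L);
    // Bounded retries with a fixed sleep let clients ride out a
    // NameNode maintenance window instead of failing the job.
    return RetryPolicies.retryUpToMaximumCountWithFixedSleep(
        maxRetries, sleepMs, TimeUnit.MILLISECONDS);
  }
}
{code}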

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3504) Configurable retry in DFSClient

2012-06-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401271#comment-13401271
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3504:
--

The b-1.0 patch also applies to branch-1.  Will commit it.

> Configurable retry in DFSClient
> ---
>
> Key: HDFS-3504
> URL: https://issues.apache.org/jira/browse/HDFS-3504
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 1.0.0, 2.0.0-alpha
>Reporter: Siddharth Seth
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 2.0.1-alpha
>
> Attachments: h3504_20120607.patch, h3504_20120608.patch, 
> h3504_20120611.patch, h3504_20120611_b-1.0.patch
>
>
> When NN maintenance is performed on a large cluster, jobs end up failing. 
> This is particularly bad for long-running jobs. The client retry policy could 
> be made configurable so that jobs don't need to be restarted.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3559) DFSTestUtil: use Builder class to construct DFSTestUtil instances

2012-06-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401263#comment-13401263
 ] 

Hadoop QA commented on HDFS-3559:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12533440/HDFS-3559.002.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 8 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2702//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2702//console

This message is automatically generated.

> DFSTestUtil: use Builder class to construct DFSTestUtil instances
> -
>
> Key: HDFS-3559
> URL: https://issues.apache.org/jira/browse/HDFS-3559
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3559.001.patch, HDFS-3559.002.patch
>
>
> The number of parameters in DFSTestUtil's constructor has grown over time. 
> It would be nice to have a Builder class, similar to MiniDFSCluster.Builder, 
> which could construct an instance of DFSTestUtil.
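
The Builder shape proposed above is the standard one: replace a long
positional constructor with named, chainable setters and defaults, so that
adding a parameter later does not break existing call sites. A minimal
sketch (field names and defaults are assumptions, not the committed API):

{code}
// Minimal illustration of the Builder pattern for a test utility;
// fields and defaults are placeholders, not the HDFS-3559 patch.
class DFSTestUtilSketch {
  private final String name;
  private final int numFiles;
  private final int maxLevels;
  private final int maxSize;

  private DFSTestUtilSketch(Builder b) {
    this.name = b.name;
    this.numFiles = b.numFiles;
    this.maxLevels = b.maxLevels;
    this.maxSize = b.maxSize;
  }

  static class Builder {
    private String name = "test";
    private int numFiles = 10;
    private int maxLevels = 3;
    private int maxSize = 8192;

    Builder setName(String v)   { this.name = v;      return this; }
    Builder setNumFiles(int v)  { this.numFiles = v;  return this; }
    Builder setMaxLevels(int v) { this.maxLevels = v; return this; }
    Builder setMaxSize(int v)   { this.maxSize = v;   return this; }

    DFSTestUtilSketch build() { return new DFSTestUtilSketch(this); }
  }
}
{code}

Callers would then read as, e.g., {{new DFSTestUtilSketch.Builder().setName("TestFoo").setNumFiles(4).build()}}, 
and future parameters can be added without touching existing tests.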

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira