[jira] [Commented] (HDFS-140) When a file is deleted, its blocks remain in the blocksmap till the next block report from Datanode

2011-12-27 Thread dhruba borthakur (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176461#comment-13176461
 ] 

dhruba borthakur commented on HDFS-140:
---

Hi Uma, your technical point makes sense, but my feeling is that it is too 
late to roll it into the 0.20 release. It is already fixed in newer releases, 
so new users will automatically get this fix. People who are stuck with older 
0.20-based releases can pull this patch into their own code base, can they 
not?

> When a file is deleted, its blocks remain in the blocksmap till the next 
> block report from Datanode
> ---
>
> Key: HDFS-140
> URL: https://issues.apache.org/jira/browse/HDFS-140
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.20.1
>Reporter: dhruba borthakur
>Assignee: Uma Maheswara Rao G
> Attachments: HDFS-140.20security205.patch
>
>
> When a file is deleted, the namenode sends block deletion messages to the 
> appropriate datanodes. However, the namenode does not delete these blocks 
> from the blocksmap. Instead, the processing of the next block report from the 
> datanode causes these blocks to be removed from the blocksmap.
> If we want to make block report processing less frequent, this issue needs 
> to be addressed. It also introduces non-deterministic behaviour in a few 
> unit tests. Another factor to consider is ensuring that duplicate block 
> detection is not compromised.
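
For illustration, a minimal sketch of the eager removal being described; the names below (blocksMap, addToInvalidates) are stand-ins for the NameNode internals, not the actual FSNamesystem code:

{code}
// Sketch only: forget a deleted file's blocks at delete time instead of
// waiting for the next block report to prune them from the blocksmap.
void removeBlocksOfDeletedFile(Iterable<Block> fileBlocks) {
  for (Block b : fileBlocks) {
    addToInvalidates(b);      // existing behavior: ask DNs to delete replicas
    blocksMap.removeBlock(b); // proposed: remove from the blocksmap now
  }
}
{code}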

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2698) BackupNode is downloading image from NameNode for every checkpoint

2011-12-27 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176421#comment-13176421
 ] 

Konstantin Boudnik commented on HDFS-2698:
--

+1 patch looks correct.

> BackupNode is downloading image from NameNode for every checkpoint
> --
>
> Key: HDFS-2698
> URL: https://issues.apache.org/jira/browse/HDFS-2698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: rollFSImage.patch, rollFSImage.patch
>
>
> BackupNode can make periodic checkpoints without downloading the image and 
> edits files from the NameNode, by simply saving its namespace to local disk. 
> This is not happening because the NN renews its checkpoint time after every 
> checkpoint, making its image appear ahead of the BN's even though the two 
> are in sync.
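
Conceptually, the checkpoint decision the BN should be able to make looks like the following sketch (hypothetical names, not the actual BackupNode code):

{code}
// Sketch: a local namespace save suffices while BN and NN are in sync;
// downloading the image is only needed when the BN has fallen behind.
if (bnCheckpointTime >= nnCheckpointTime) {
  saveNamespace();          // in sync: checkpoint from local state only
} else {
  downloadImageAndEdits();  // genuinely out of date: fetch from the NN
}
{code}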





[jira] [Commented] (HDFS-1910) when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time

2011-12-27 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176415#comment-13176415
 ] 

Konstantin Boudnik commented on HDFS-1910:
--

+1 patch looks good.

> when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice 
> every time
> 
>
> Key: HDFS-1910
> URL: https://issues.apache.org/jira/browse/HDFS-1910
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Gokul
>Priority: Minor
>  Labels: critical-0.22.0
> Attachments: saveImageOnce-v0.22.patch
>
>
> When the image and edits directories are configured to be the same, the 
> fsimage is flushed from memory to disk twice whenever saveNamespace runs. 
> This may hurt the performance of the BackupNode/SNN, which performs a 
> saveNamespace at every checkpoint.
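
A minimal sketch of the deduplication the patch aims for (types and method names are hypothetical):

{code}
// Sketch: write the fsimage once per physical directory, even when the
// image and edits dirs resolve to the same location.
Set<File> written = new HashSet<File>();
for (StorageDirectory sd : imageDirs) {
  if (written.add(sd.getRoot())) {
    saveFSImage(sd);  // skipped when this root was already written
  }
}
{code}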





[jira] [Commented] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-27 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176412#comment-13176412
 ] 

Harsh J commented on HDFS-2263:
---

Ah, but checksum errors are always reported properly (changing block data, 
adding extra bits to the start/end, etc.). How would we equate a 
truncated/unreadable block with a checksum failure?

> Make DFSClient report bad blocks more quickly
> -
>
> Key: HDFS-2263
> URL: https://issues.apache.org/jira/browse/HDFS-2263
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.20.2
>Reporter: Aaron T. Myers
>Assignee: Harsh J
> Attachments: HDFS-2263.patch
>
>
> In certain circumstances the DFSClient may detect a block as bad without 
> reporting it promptly to the NN.
> If, when reading a file, a client finds an invalid checksum on a block, it 
> immediately reports the bad block to the NN. But if, when serving up a block, 
> a DN finds the block truncated, it reports this to the client, and the client 
> merely adds that DN to its list of dead nodes and moves on to another DN, 
> without reporting anything to the NN.





[jira] [Updated] (HDFS-2709) HA: Appropriately handle error conditions in EditLogTailer

2011-12-27 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2709:
-

Attachment: HDFS-2709-HDFS-1623.patch

Here's a patch which addresses the issue by implementing option (a): support 
for reading from the middle of a finalized file.

I'd still like to run a few more tests, but this should be pretty close to 
final, so it's worth reviewing.
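
In rough outline, option (a) amounts to the following (a sketch; the stream and helper names are stand-ins, not necessarily the patch's actual API):

{code}
// Sketch of option (a): resume from the last applied transaction rather
// than replaying a finalized edits file from its beginning.
EditLogInputStream in = openFinalizedSegment(segmentStartTxId);
skipUntilTxId(in, lastAppliedTxId + 1);   // skip edits already replayed
FSEditLogOp op;
while ((op = readOp(in)) != null) {
  applyEdit(op);                          // only not-yet-applied edits
  lastAppliedTxId = op.getTransactionId();
}
{code}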

> HA: Appropriately handle error conditions in EditLogTailer
> --
>
> Key: HDFS-2709
> URL: https://issues.apache.org/jira/browse/HDFS-2709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Aaron T. Myers
>Priority: Critical
> Attachments: HDFS-2709-HDFS-1623.patch
>
>
> Currently, if the edit log tailer experiences an error replaying edits in the 
> middle of a file, it will go back to retrying from the beginning of the file 
> on the next tailing iteration. This is incorrect, since many of the edits 
> will have already been replayed and not all edits are idempotent.
> Instead, we either need to (a) support reading from the middle of a finalized 
> file (i.e. skip those edits already applied), or (b) abort the standby if it 
> hits an error while tailing. If (a) isn't simple, let's do (b) for now and 
> come back to (a) later, since this is a rare circumstance and it's better to 
> abort than to be incorrect.





[jira] [Commented] (HDFS-2640) Javadoc generation hangs

2011-12-27 Thread Vinod Kumar Vavilapalli (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176405#comment-13176405
 ] 

Vinod Kumar Vavilapalli commented on HDFS-2640:
---

This is still hanging for me on the latest trunk. Anyone else seeing it too?

> Javadoc generation hangs
> 
>
> Key: HDFS-2640
> URL: https://issues.apache.org/jira/browse/HDFS-2640
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS-2640.patch
>
>
> Typing 'mvn javadoc:javadoc' causes the build to hang.





[jira] [Commented] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-27 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176310#comment-13176310
 ] 

Aaron T. Myers commented on HDFS-2263:
--

Can we not do as Todd suggested, i.e. 'only report "corrupt" if we have a 
verifiable case of bad checksum'? Is there some reason that case is impossible 
to distinguish from other generic errors?

Though this issue is relatively low priority, I don't think there's any reason 
we wouldn't eventually fix it, so I'd rather just leave it open.

> Make DFSClient report bad blocks more quickly
> -
>
> Key: HDFS-2263
> URL: https://issues.apache.org/jira/browse/HDFS-2263
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.20.2
>Reporter: Aaron T. Myers
>Assignee: Harsh J
> Attachments: HDFS-2263.patch
>
>
> In certain circumstances the DFSClient may detect a block as bad without 
> reporting it promptly to the NN.
> If, when reading a file, a client finds an invalid checksum on a block, it 
> immediately reports the bad block to the NN. But if, when serving up a block, 
> a DN finds the block truncated, it reports this to the client, and the client 
> merely adds that DN to its list of dead nodes and moves on to another DN, 
> without reporting anything to the NN.





[jira] [Commented] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-27 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176304#comment-13176304
 ] 

Harsh J commented on HDFS-2263:
---

Understood, Todd. I believe the tests in TestClientReportBadBlock explain this 
too -- I had not noticed that test before attempting this.

Aaron - can we resolve this as Won't Fix?

> Make DFSClient report bad blocks more quickly
> -
>
> Key: HDFS-2263
> URL: https://issues.apache.org/jira/browse/HDFS-2263
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.20.2
>Reporter: Aaron T. Myers
>Assignee: Harsh J
> Attachments: HDFS-2263.patch
>
>
> In certain circumstances the DFSClient may detect a block as bad without 
> reporting it promptly to the NN.
> If, when reading a file, a client finds an invalid checksum on a block, it 
> immediately reports the bad block to the NN. But if, when serving up a block, 
> a DN finds the block truncated, it reports this to the client, and the client 
> merely adds that DN to its list of dead nodes and moves on to another DN, 
> without reporting anything to the NN.





[jira] [Commented] (HDFS-2263) Make DFSClient report bad blocks more quickly

2011-12-27 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176244#comment-13176244
 ] 

Todd Lipcon commented on HDFS-2263:
---

Haven't looked at the patch, but in general we should only report "corrupt" if 
we have a verifiable case of a bad checksum. Other "generic errors" out of 
OP_READ_BLOCK shouldn't trigger a bad-block report, for the reason Harsh 
mentioned, even if it's the "final retry" -- e.g. maybe the client got 
partitioned from the DNs but not from the NN. In that case we don't want it 
going around reporting bad blocks everywhere.
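
Client-side, that distinction might look like this sketch (readBlockFrom, deadNodes, and reportBadBlocks stand in for the DFSClient internals):

{code}
// Sketch: escalate only verified corruption to the NameNode; treat any
// other read failure as a reason to try a different DataNode.
try {
  readBlockFrom(dn, block);
} catch (ChecksumException ce) {
  reportBadBlocks(block, dn);  // verifiable bad checksum: tell the NN
} catch (IOException ioe) {
  deadNodes.add(dn);           // generic error (e.g. a partition): move on
}
{code}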

> Make DFSClient report bad blocks more quickly
> -
>
> Key: HDFS-2263
> URL: https://issues.apache.org/jira/browse/HDFS-2263
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.20.2
>Reporter: Aaron T. Myers
>Assignee: Harsh J
> Attachments: HDFS-2263.patch
>
>
> In certain circumstances the DFSClient may detect a block as bad without 
> reporting it promptly to the NN.
> If, when reading a file, a client finds an invalid checksum on a block, it 
> immediately reports the bad block to the NN. But if, when serving up a block, 
> a DN finds the block truncated, it reports this to the client, and the client 
> merely adds that DN to its list of dead nodes and moves on to another DN, 
> without reporting anything to the NN.





[jira] [Commented] (HDFS-2709) HA: Appropriately handle error conditions in EditLogTailer

2011-12-27 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176234#comment-13176234
 ] 

Eli Collins commented on HDFS-2709:
---

There are two high-level error cases here:
# Error reading edits from the shared mount (e.g. the NFS mount is flaky or has failed)
# Error applying an edit we've read

#1 seems most likely. In that case it seems like we should retry a number of 
times and then abort (i.e. if the failure isn't transient, we're hosed). Since 
we need to apply edits in order, I don't think we should read ahead and queue 
up edits: we're likely to hit errors reading subsequent edits as well, and we 
can't apply them until the earlier ones have succeeded. Queuing could give a 
performance improvement by saving subsequent IO, but it also means the SBN 
could use a lot more memory than the primary, which is bad.

#2 should abort immediately, as the edits are checksummed (same rationale as 
ATM's comment above).
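
Sketched in code, that policy could look roughly like this (the loop and names are hypothetical, not the actual EditLogTailer):

{code}
// Sketch: bounded retries for shared-storage read errors (case #1),
// immediate abort when applying a checksummed edit fails (case #2).
int readFailures = 0;
while (running) {
  try {
    tailAndApplyNewEdits();
    readFailures = 0;
  } catch (IOException readError) {        // case #1: flaky shared mount
    if (++readFailures > MAX_READ_RETRIES) {
      terminate("giving up reading from the shared edits dir", readError);
    }
    sleepQuietly(RETRY_INTERVAL_MS);
  } catch (RuntimeException applyError) {  // case #2: edit failed to apply
    terminate("error applying a checksummed edit", applyError);
  }
}
{code}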


> HA: Appropriately handle error conditions in EditLogTailer
> --
>
> Key: HDFS-2709
> URL: https://issues.apache.org/jira/browse/HDFS-2709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Aaron T. Myers
>Priority: Critical
>
> Currently if the edit log tailer experiences an error replaying edits in the 
> middle of a file, it will go back to retrying from the beginning of the file 
> on the next tailing iteration. This is incorrect since many of the edits will 
> have already been replayed, and not all edits are idempotent.
> Instead, we either need to (a) support reading from the middle of a finalized 
> file (ie skip those edits already applied), or (b) abort the standby if it 
> hits an error while tailing. If "a" isn't simple, let's do "b" for now and 
> come back to 'a' later since this is a rare circumstance and better to abort 
> than be incorrect.





[jira] [Commented] (HDFS-2709) HA: Appropriately handle error conditions in EditLogTailer

2011-12-27 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176232#comment-13176232
 ] 

Todd Lipcon commented on HDFS-2709:
---

What about the case where the edit log is large enough that it doesn't fit in 
RAM? I suppose we can deal with that by setting the active NN to roll every N 
MB, where N is fairly small?

> HA: Appropriately handle error conditions in EditLogTailer
> --
>
> Key: HDFS-2709
> URL: https://issues.apache.org/jira/browse/HDFS-2709
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, name-node
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Todd Lipcon
>Assignee: Aaron T. Myers
>Priority: Critical
>
> Currently if the edit log tailer experiences an error replaying edits in the 
> middle of a file, it will go back to retrying from the beginning of the file 
> on the next tailing iteration. This is incorrect since many of the edits will 
> have already been replayed, and not all edits are idempotent.
> Instead, we either need to (a) support reading from the middle of a finalized 
> file (ie skip those edits already applied), or (b) abort the standby if it 
> hits an error while tailing. If "a" isn't simple, let's do "b" for now and 
> come back to 'a' later since this is a rare circumstance and better to abort 
> than be incorrect.





[jira] [Commented] (HDFS-2413) Add public APIs for safemode

2011-12-27 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176230#comment-13176230
 ] 

Eli Collins commented on HDFS-2413:
---

I don't think we want to wait inside the client; we should just expose the API 
so a user/management tool can see if the NN is in safemode (the tool can wait 
and re-query if it wants).
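
For example, a management tool could poll along these lines (a sketch, assuming the existing DistributedFileSystem safemode call were made public):

{code}
// Sketch: the wait-and-requery loop lives in the tool, not in DFSClient.
// SAFEMODE_GET queries the current state without changing it.
// (InterruptedException handling omitted in this sketch.)
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
while (dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET)) {
  Thread.sleep(10000);  // still in safemode; re-query shortly
}
{code}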

> Add public APIs for safemode
> 
>
> Key: HDFS-2413
> URL: https://issues.apache.org/jira/browse/HDFS-2413
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Harsh J
> Fix For: 0.24.0
>
> Attachments: HDFS-2413.patch
>
>
> Currently the APIs for safe-mode are part of DistributedFileSystem, which is 
> supposed to be a private interface. However, dependent software often wants 
> to wait until the NN is out of safemode. Though it could poll by trying to 
> create a file and catching SafeModeException, we should consider making some 
> of these APIs public.





[jira] [Commented] (HDFS-2707) HttpFS should read the hadoop-auth secret from a file instead inline from the configuration

2011-12-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176170#comment-13176170
 ] 

Hudson commented on HDFS-2707:
--

Integrated in Hadoop-Mapreduce-trunk #940 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/940/])
HDFS-2707. HttpFS should read the hadoop-auth secret from a file instead 
inline from the configuration. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1224794
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/HttpAuthentication.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> HttpFS should read the hadoop-auth secret from a file instead inline from the 
> configuration
> ---
>
> Key: HDFS-2707
> URL: https://issues.apache.org/jira/browse/HDFS-2707
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS-2707.patch, HDFS-2707.patch
>
>
> Similar to HADOOP-7621, the secret should be in a file other than the 
> configuration file.





[jira] [Commented] (HDFS-2707) HttpFS should read the hadoop-auth secret from a file instead inline from the configuration

2011-12-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176163#comment-13176163
 ] 

Hudson commented on HDFS-2707:
--

Integrated in Hadoop-Mapreduce-0.23-Build #141 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/141/])
Merge -r 1224793:1224794 from trunk to branch. FIXES: HDFS-2707

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1224795
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/HttpAuthentication.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/AuthFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> HttpFS should read the hadoop-auth secret from a file instead inline from the 
> configuration
> ---
>
> Key: HDFS-2707
> URL: https://issues.apache.org/jira/browse/HDFS-2707
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS-2707.patch, HDFS-2707.patch
>
>
> Similar to HADOOP-7621, the secret should be in a file other than the 
> configuration file.





[jira] [Commented] (HDFS-2707) HttpFS should read the hadoop-auth secret from a file instead inline from the configuration

2011-12-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176160#comment-13176160
 ] 

Hudson commented on HDFS-2707:
--

Integrated in Hadoop-Hdfs-trunk #907 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/907/])
HDFS-2707. HttpFS should read the hadoop-auth secret from a file instead 
inline from the configuration. (tucu)

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1224794
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/HttpAuthentication.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/AuthFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> HttpFS should read the hadoop-auth secret from a file instead inline from the 
> configuration
> ---
>
> Key: HDFS-2707
> URL: https://issues.apache.org/jira/browse/HDFS-2707
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS-2707.patch, HDFS-2707.patch
>
>
> Similar to HADOOP-7621, the secret should be in a file other than the 
> configuration file.





[jira] [Commented] (HDFS-2707) HttpFS should read the hadoop-auth secret from a file instead inline from the configuration

2011-12-27 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176158#comment-13176158
 ] 

Hudson commented on HDFS-2707:
--

Integrated in Hadoop-Hdfs-0.23-Build #120 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/120/])
Merge -r 1224793:1224794 from trunk to branch. FIXES: HDFS-2707

tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1224795
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/HttpAuthentication.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/conf/httpfs-signature.secret
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/AuthFilter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/resources/httpfs-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/TestHttpFSFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> HttpFS should read the hadoop-auth secret from a file instead inline from the 
> configuration
> ---
>
> Key: HDFS-2707
> URL: https://issues.apache.org/jira/browse/HDFS-2707
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.24.0, 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.24.0, 0.23.1
>
> Attachments: HDFS-2707.patch, HDFS-2707.patch
>
>
> Similar to HADOOP-7621, the secret should be in a file other than the 
> configuration file.





[jira] [Updated] (HDFS-2547) ReplicationTargetChooser has incorrect block placement comments

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2547:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks Aaron for your persistence on this one. Committed to branch-1.

> ReplicationTargetChooser has incorrect block placement comments
> ---
>
> Key: HDFS-2547
> URL: https://issues.apache.org/jira/browse/HDFS-2547
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.20.1
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Trivial
> Fix For: 1.1.0
>
> Attachments: HDFS-2547.patch, HDFS-2547.patch
>
>
> {code}
> /** The class is responsible for choosing the desired number of targets
>  * for placing block replicas.
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on the same rack as the **first replca**.
>  */
> {code}
> That should read "second replica". The test cases, as well as the docs, 
> confirm that this is the behavior.





[jira] [Commented] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176137#comment-13176137
 ] 

Hadoop QA commented on HDFS-554:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12508666/HDFS-554.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 20 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

-1 release audit.  The applied patch generated 1 release audit warnings 
(more than the trunk's current 0 warnings).

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1741//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1741//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/1741//artifact/trunk/hadoop-hdfs-project/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/1741//console

This message is automatically generated.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.
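
The change under discussion is essentially the following (a sketch; {{triplets}} stands in for BlockInfo's internal array and {{additional}} for the requested growth):

{code}
// Before (sketch): element-by-element copy in a for() loop.
Object[] old = triplets;
Object[] expanded = new Object[old.length + additional];
for (int i = 0; i < old.length; i++) {
  expanded[i] = old[i];
}

// After (sketch): a single bulk copy.
System.arraycopy(old, 0, expanded, 0, old.length);
// Typesafe Java 6 alternative, equivalent here:
// Object[] expanded = Arrays.copyOf(old, old.length + additional);
{code}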





[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Status: Patch Available  (was: Open)

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.





[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Status: Open  (was: Patch Available)

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.





[jira] [Updated] (HDFS-554) BlockInfo.ensureCapacity may get a speedup from System.arraycopy()

2011-12-27 Thread Harsh J (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-554:
-

Attachment: HDFS-554.txt

For some reason the patch didn't trigger a Jenkins build. Re-upping the same 
patch.

> BlockInfo.ensureCapacity may get a speedup from System.arraycopy()
> --
>
> Key: HDFS-554
> URL: https://issues.apache.org/jira/browse/HDFS-554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: HDFS-554.patch, HDFS-554.txt, HDFS-554.txt
>
>
> BlockInfo.ensureCapacity() uses a for() loop to copy the old array data into 
> the expanded array.  {{System.arraycopy()}} is generally much faster for 
> this, as it can do a bulk memory copy. There is also the typesafe Java6 
> {{Arrays.copyOf()}} to consider, though here it offers no tangible benefit.





[jira] [Commented] (HDFS-1314) dfs.block.size accepts only absolute value

2011-12-27 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176115#comment-13176115
 ] 

Harsh J commented on HDFS-1314:
---

Thanks for the revised patch, Sho - this looks good. Only a few further, 
mostly docs-related, comments that pertain to HDFS:

- For hdfs.c, would it be possible for you to also switch the blockSize and 
replication loading to use jFS's getDefaultBlockSize and getDefaultReplication 
calls instead of loading from Configuration? It would be a nice cleanup, but 
if you would rather pursue it in another JIRA, please open a new one and we 
can handle it there.
- You could add more documentation notes to cluster_setup explaining that the 
most common prefixes (k, m, g) can be supplied for block sizes instead of long 
values (a parsing sketch follows below). It is also worth adding the prefix 
list to the {{hdfs-default.xml}} docs.
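
A rough sketch of the kind of prefix-aware parsing being discussed (a hypothetical helper, not the actual Configuration code):

{code}
// Sketch: accept values like "8m" or "128k" for dfs.block.size in
// addition to absolute byte counts.
static long parseSizeWithPrefix(String value) {
  String v = value.trim().toLowerCase();
  char last = v.charAt(v.length() - 1);
  long multiplier = last == 'k' ? 1L << 10
                  : last == 'm' ? 1L << 20
                  : last == 'g' ? 1L << 30 : 1;
  if (multiplier != 1) {
    v = v.substring(0, v.length() - 1);  // drop the prefix character
  }
  return Long.parseLong(v) * multiplier;
}
{code}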

> dfs.block.size accepts only absolute value
> --
>
> Key: HDFS-1314
> URL: https://issues.apache.org/jira/browse/HDFS-1314
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Karim Saadah
>Assignee: Sho Shimauchi
>Priority: Minor
>  Labels: newbie
> Attachments: hdfs-1314.txt, hdfs-1314.txt
>
>
> Using "dfs.block.size=8388608" works 
> but "dfs.block.size=8mb" does not.
> Using "dfs.block.size=8mb" should throw some WARNING on NumberFormatException.
> (http://pastebin.corp.yahoo.com/56129)
