[jira] [Commented] (HDFS-782) dynamic replication

2012-11-19 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500886#comment-13500886
 ] 

Harsh J commented on HDFS-782:
--

Hey Putu,

I do not know of anyone already working on this, but you are welcome to!

> dynamic replication
> ---
>
> Key: HDFS-782
> URL: https://issues.apache.org/jira/browse/HDFS-782
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Ning Zhang
>
> In a large and busy cluster, a block can be requested by many clients at the 
> same time. HDFS-767 tries to solve the failure case where the # of retries 
> exceeds the maximum # of retries. However, that patch doesn't solve the 
> performance issue, since all failing clients have to wait a certain period 
> before retrying, and the # of retries could be high. 
> One way to solve the performance issue is to dynamically increase the # of 
> replicas for a "hot" block when it is requested many times within a short 
> period. The NameNode needs to be aware of such situations and should only 
> clean up the extra replicas once they are no longer accessed frequently. 
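The hot-block idea above can be sketched with a simple per-block request counter. Everything here (class name, thresholds, window length) is a hypothetical illustration, not part of any HDFS patch:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of "dynamic replication": count reads of each block
// within a sliding time window and raise the suggested replication for
// blocks that are read more than a threshold number of times.
public class HotBlockTracker {
    private final long windowMillis;   // length of the sliding window
    private final int hotThreshold;    // reads within the window that mark a block "hot"
    private final int baseReplication;
    private final int maxReplication;
    private final Map<Long, Deque<Long>> readTimes = new HashMap<>();

    public HotBlockTracker(long windowMillis, int hotThreshold,
                           int baseReplication, int maxReplication) {
        this.windowMillis = windowMillis;
        this.hotThreshold = hotThreshold;
        this.baseReplication = baseReplication;
        this.maxReplication = maxReplication;
    }

    // Record a read of the block at the given time and return the
    // suggested replication factor for that block.
    public int recordRead(long blockId, long nowMillis) {
        Deque<Long> times = readTimes.computeIfAbsent(blockId, k -> new ArrayDeque<>());
        times.addLast(nowMillis);
        // Drop reads that fell out of the sliding window.
        while (!times.isEmpty() && times.peekFirst() < nowMillis - windowMillis) {
            times.removeFirst();
        }
        if (times.size() >= hotThreshold) {
            // One extra replica per threshold-multiple of reads, capped at max.
            int extra = times.size() / hotThreshold;
            return Math.min(baseReplication + extra, maxReplication);
        }
        return baseReplication;
    }
}
```

In a real implementation the NameNode would also have to schedule replication work and later reclaim the extra replicas, which this sketch leaves out.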

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4208) NameNode could be stuck in SafeMode due to incomplete blocks in branch-1

2012-11-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4208:
-

Description: 
In one test case, NameNode allocated a block and then was killed before the 
client got the addBlock response. After NameNode restarted, it couldn't get out 
of SafeMode, waiting for a block which was never created. In trunk, NameNode 
can get out of SafeMode since it only counts complete blocks. However, branch-1 
doesn't have a clear notion of an under-construction block in the NameNode. 

JIRA HDFS-4212 tracks the never-created-block issue, and this JIRA is to 
fix NameNode in branch-1 so it can get out of SafeMode when a never-created 
block exists.

The proposed idea is for SafeMode not to count the zero-size last block of an 
under-construction file as part of the total block count.

  was:As in trunk, SafeMode should count only complete blocks in branch-1.


> NameNode could be stuck in SafeMode due to incomplete blocks in branch-1
> 
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> In one test case, NameNode allocated a block and then was killed before the 
> client got the addBlock response. After NameNode restarted, it couldn't get 
> out of SafeMode, waiting for a block which was never created. In trunk, 
> NameNode can get out of SafeMode since it only counts complete blocks. 
> However, branch-1 doesn't have a clear notion of an under-construction block 
> in the NameNode. 
> JIRA HDFS-4212 tracks the never-created-block issue, and this JIRA is to 
> fix NameNode in branch-1 so it can get out of SafeMode when a never-created 
> block exists.
> The proposed idea is for SafeMode not to count the zero-size last block of 
> an under-construction file as part of the total block count.
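The proposed accounting change can be sketched as follows. The class and field names are simplified stand-ins, not the actual branch-1 NameNode code:

```java
import java.util.List;

// Simplified sketch of the proposed SafeMode accounting: when computing the
// total block count that SafeMode waits for, skip the zero-size last block
// of a file that is still under construction, since that block may never
// have been created on any DataNode.
public class SafeModeBlockCount {
    public static class Block {
        final long numBytes;
        public Block(long numBytes) { this.numBytes = numBytes; }
    }

    public static class INodeFile {
        final List<Block> blocks;
        final boolean underConstruction;
        public INodeFile(List<Block> blocks, boolean underConstruction) {
            this.blocks = blocks;
            this.underConstruction = underConstruction;
        }
    }

    // Count the blocks SafeMode should wait for before leaving.
    public static int countSafeModeBlocks(List<INodeFile> files) {
        int total = 0;
        for (INodeFile f : files) {
            int n = f.blocks.size();
            if (f.underConstruction && n > 0
                    && f.blocks.get(n - 1).numBytes == 0) {
                n--;  // exclude the possibly never-created zero-size last block
            }
            total += n;
        }
        return total;
    }
}
```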



[jira] [Updated] (HDFS-4208) NameNode could be stuck in SafeMode due to never-created blocks in branch-1

2012-11-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4208:
-

Summary: NameNode could be stuck in SafeMode due to never-created blocks in 
branch-1  (was: NameNode could be stuck in SafeMode due to incomplete blocks in 
branch-1)

> NameNode could be stuck in SafeMode due to never-created blocks in branch-1
> ---
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> In one test case, NameNode allocated a block and then was killed before the 
> client got the addBlock response. After NameNode restarted, it couldn't get 
> out of SafeMode, waiting for a block which was never created. In trunk, 
> NameNode can get out of SafeMode since it only counts complete blocks. 
> However, branch-1 doesn't have a clear notion of an under-construction block 
> in the NameNode. 
> JIRA HDFS-4212 tracks the never-created-block issue, and this JIRA is to 
> fix NameNode in branch-1 so it can get out of SafeMode when a never-created 
> block exists.
> The proposed idea is for SafeMode not to count the zero-size last block of 
> an under-construction file as part of the total block count.



[jira] [Created] (HDFS-4212) NameNode can't differentiate between a never-created block and a block which is really missing

2012-11-19 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4212:


 Summary: NameNode can't differentiate between a never-created 
block and a block which is really missing
 Key: HDFS-4212
 URL: https://issues.apache.org/jira/browse/HDFS-4212
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 1.2.0, 3.0.0
Reporter: Brandon Li


In one test case, NameNode allocated a block and then was killed before the 
client got the addBlock response. 

After NameNode restarted, the block which was never created was considered a 
missing block, and FSCK would report that the file is corrupted.

The problem seems to be that NameNode can't differentiate between a 
never-created block and a block which is really missing.



[jira] [Updated] (HDFS-4208) NameNode could be stuck in SafeMode due to incomplete blocks in branch-1

2012-11-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4208:
-

Summary: NameNode could be stuck in SafeMode due to incomplete blocks in 
branch-1  (was:  SafeMode should count only complete blocks in branch-1)

> NameNode could be stuck in SafeMode due to incomplete blocks in branch-1
> 
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> As in trunk, SafeMode should count only complete blocks in branch-1.



[jira] [Commented] (HDFS-4208) SafeMode should count only complete blocks in branch-1

2012-11-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500877#comment-13500877
 ] 

Brandon Li commented on HDFS-4208:
--

Hi Uma, thanks for reviewing the patch. Yes, fsync'ed blocks may not be in the 
complete state. 
Let me update the JIRA title and description to better clarify the problem.

>  SafeMode should count only complete blocks in branch-1
> ---
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> As in trunk, SafeMode should count only complete blocks in branch-1.



[jira] [Commented] (HDFS-4208) SafeMode should count only complete blocks in branch-1

2012-11-19 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500865#comment-13500865
 ] 

Brandon Li commented on HDFS-4208:
--

test-patch result:
{noformat}
-1 overall.  
+1 @author.  The patch does not contain any @author tags.
+1 tests included.  The patch appears to include 3 new or modified tests.
+1 javadoc.  The javadoc tool did not generate any warning messages.
+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.
-1 findbugs.  The patch appears to introduce 200 new Findbugs (version 
2.0.0) warnings.
{noformat}
This patch doesn't introduce any new findbugs warnings; the warnings reported 
above are pre-existing.

>  SafeMode should count only complete blocks in branch-1
> ---
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> As in trunk, SafeMode should count only complete blocks in branch-1.



[jira] [Commented] (HDFS-782) dynamic replication

2012-11-19 Thread Putu Yuwono (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500862#comment-13500862
 ] 

Putu Yuwono commented on HDFS-782:
--

Is there anybody who has been working on this issue?
It's been 3 years and it is still unresolved.


> dynamic replication
> ---
>
> Key: HDFS-782
> URL: https://issues.apache.org/jira/browse/HDFS-782
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Ning Zhang
>
> In a large and busy cluster, a block can be requested by many clients at the 
> same time. HDFS-767 tries to solve the failure case where the # of retries 
> exceeds the maximum # of retries. However, that patch doesn't solve the 
> performance issue, since all failing clients have to wait a certain period 
> before retrying, and the # of retries could be high. 
> One way to solve the performance issue is to dynamically increase the # of 
> replicas for a "hot" block when it is requested many times within a short 
> period. The NameNode needs to be aware of such situations and should only 
> clean up the extra replicas once they are no longer accessed frequently. 



[jira] [Commented] (HDFS-4208) SafeMode should count only complete blocks in branch-1

2012-11-19 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500856#comment-13500856
 ] 

Uma Maheswara Rao G commented on HDFS-4208:
---

Hi Brandon,

  What about fsync'ed blocks? They may not be in the completed state, right?

Regards,
Uma

>  SafeMode should count only complete blocks in branch-1
> ---
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> As in trunk, SafeMode should count only complete blocks in branch-1.



[jira] [Updated] (HDFS-4208) SafeMode should count only complete blocks in branch-1

2012-11-19 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4208:
-

Attachment: HDFS-4208.branch-1.patch

>  SafeMode should count only complete blocks in branch-1
> ---
>
> Key: HDFS-4208
> URL: https://issues.apache.org/jira/browse/HDFS-4208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4208.branch-1.patch
>
>
> As in trunk, SafeMode should count only complete blocks in branch-1.



[jira] [Commented] (HDFS-4179) BackupNode: allow reads, fix checkpointing, safeMode

2012-11-19 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500619#comment-13500619
 ] 

Konstantin Boudnik commented on HDFS-4179:
--

+1 patch looks good. Tests are passing on the trunk.

> BackupNode: allow reads, fix checkpointing, safeMode
> 
>
> Key: HDFS-4179
> URL: https://issues.apache.org/jira/browse/HDFS-4179
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 2.0.2-alpha
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Attachments: BNAllowReads.patch
>
>
> BackupNode should be allowed to accept read command. Needs some adjustments 
> in checkpointing and with safe mode.



[jira] [Commented] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500615#comment-13500615
 ] 

Robert Joseph Evans commented on HDFS-4199:
---

The changes look fairly simple and straightforward, and they match the code. 
However, I am a bit concerned that we are testing/locking in functionality 
that is arguably wrong.

We are testing that new HdfsVolumeId(A, false).equals(new HdfsVolumeId(A, 
true)). If you look at how the code actually works, it starts out by creating 
a bunch of invalid ids with null for the id, and then goes off and replaces 
them with valid IDs once it finds them. I personally don't think that a valid 
volume ID should ever be equal to an invalid one. I added Andrew, who 
originally wrote this code, to see if he can take a look at it and tell us if 
this is expected behavior or not.
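The behavior being questioned can be illustrated with a simplified stand-in class (this is not the real HdfsVolumeId). Making equals() include the validity flag, as sketched here, would keep a valid id from ever comparing equal to an invalid one:

```java
import java.util.Arrays;

// Simplified stand-in for HdfsVolumeId, illustrating an equals() that takes
// the validity flag into account, so that a valid id is never equal to an
// invalid id with the same underlying bytes.
public class VolumeIdSketch {
    private final byte[] id;
    private final boolean valid;

    public VolumeIdSketch(byte[] id, boolean valid) {
        this.id = id;
        this.valid = valid;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof VolumeIdSketch)) return false;
        VolumeIdSketch other = (VolumeIdSketch) o;
        // Compare the validity flag as well as the id bytes.
        return valid == other.valid && Arrays.equals(id, other.id);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals().
        return 31 * Arrays.hashCode(id) + (valid ? 1 : 0);
    }
}
```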

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199--b.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.



[jira] [Commented] (HDFS-4209) Clean up FSDirectory and INode

2012-11-19 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500498#comment-13500498
 ] 

Colin Patrick McCabe commented on HDFS-4209:


This looks good.  I especially like how you replaced a lot of places where we 
were checking for {{!symlink && !directory}} with checking for {{file}} 
directly.

{code}
  void addToParentForImage(INodeDirectory parent, INode newNode)
{code}

Does it make sense to call this {{addToParentUnlocked}}, or something like 
that?  I mean every directory we add is "for the image" since that's where 
directories live.  But most functions take the writeLock, and this one doesn't.

{code}
-  void setQuota(String src, long nsQuota, long dsQuota) 
-throws FileNotFoundException, QuotaExceededException,
-UnresolvedLinkException { 
+  void setQuota(String src, long nsQuota, long dsQuota) throws IOException  { 
{code}

It seems like the new throw specification provides less information to the 
caller than the old one.  Considering that people often rely on throw specs as 
a kind of documentation about how the function can fail, are we sure we want to 
remove this?
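The isFile() refactoring praised above (replacing scattered {{!symlink && !directory}} checks) can be sketched like this; the class names mirror HDFS but this is a simplified illustration, not the actual patch:

```java
// Simplified sketch of the INode refactoring: instead of writing
// !isDirectory() && !isSymlink() at every call site, the base class
// derives isFile() once from the other two predicates.
public abstract class INodeSketch {
    public boolean isDirectory() { return false; }
    public boolean isSymlink()   { return false; }

    // A file is anything that is neither a directory nor a symlink.
    public final boolean isFile() { return !isDirectory() && !isSymlink(); }
}

class FileNode extends INodeSketch { }

class DirNode extends INodeSketch {
    @Override public boolean isDirectory() { return true; }
}

class SymlinkNode extends INodeSketch {
    @Override public boolean isSymlink() { return true; }
}
```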

> Clean up FSDirectory and INode
> --
>
> Key: HDFS-4209
> URL: https://issues.apache.org/jira/browse/HDFS-4209
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Attachments: h4209_20121118b.patch, h4209_20121118.patch
>
>
> - FSDirectory.addToParent(..) is only used by image loading so that 
> synchronization, modification time update and space count update are not 
> needed.
> - There are multiple places checking whether an inode is file by checking 
> !isDirectory() && !isSymlink().  Let's add isFile() to INode.
> - In the addNode/addChild/addChildNoQuotaCheck methods, returning the same 
> INode back is not useful.  It is better to simply return a boolean to 
> indicate whether the inode is added.  Also, the value of childDiskspace 
> parameter is always UNKNOWN_DISK_SPACE.



[jira] [Commented] (HDFS-4178) shell scripts should not close stderr

2012-11-19 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500496#comment-13500496
 ] 

Andy Isaacson commented on HDFS-4178:
-

bq. (I was under the misimpression that fd 0-2 were reserved unless explicit 
opened, or that they were redirected to /dev/null after dropping the 
controlling terminal)

Yeah, it's really surprising that this pitfall is left lurking for people to 
stumble into!  There's no credible use case for leaving fds 0, 1, and 2 closed 
during process startup, and it would be a huge win for {{_start}} to open 
{{/dev/null}} as appropriate before running {{main()}}. Unfortunately, I've 
confirmed that this is not done, and I actually did experimentally trigger the 
"glibc detected bad free"-in-my-datafile failure mode under glibc 2.13 running 
with {{at}}.

Thanks for committing the fix!

> shell scripts should not close stderr
> -
>
> Key: HDFS-4178
> URL: https://issues.apache.org/jira/browse/HDFS-4178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs4178.txt
>
>
> The {{start-dfs.sh}} and {{stop-dfs.sh}} scripts close stderr for some 
> subprocesses using the construct
> bq. {{2>&-}}
> This is dangerous because child processes started under this scenario will 
> re-use file descriptor 2 for opened files.  Since libc and many other 
> codepaths assume that file descriptor 2 can be written to in error 
> conditions, this can potentially result in data corruption.
> It is much better to redirect stderr using the construct {{2>/dev/null}}.



[jira] [Commented] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500494#comment-13500494
 ] 

Hadoop QA commented on HDFS-4199:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554194/HDFS-4199--b.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3543//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3543//console

This message is automatically generated.

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199--b.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.



[jira] [Commented] (HDFS-4210) NameNode Format should not fail for DNS resolution on minority of JournalNode

2012-11-19 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500475#comment-13500475
 ] 

Colin Patrick McCabe commented on HDFS-4210:


It should definitely throw a more helpful exception than 
{{NullPointerException}}.  However, I think the general idea that the quorum 
format should fail if some {{JournalNodes}} could not be formatted makes some 
sense.  If some {{JournalNodes}} could not be formatted, the system is running 
at reduced redundancy.  This could cause major problems down the road if we 
silently return success here.

Can you write a script to wait until all nodes are accessible (keep pinging 
every second until you get through, or something like that)?
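The wait-until-resolvable idea suggested above could be sketched like this; the retry budget and hostnames are illustrative, and a real deployment script would likely also check that the JournalNode ports accept connections:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of a pre-format wait loop: block until every JournalNode hostname
// resolves in DNS (or a retry budget is exhausted), then proceed with
// `hadoop namenode -format`.
public class WaitForJournalNodes {
    public static boolean allResolvable(String[] hosts, int maxAttempts,
                                        long sleepMillis) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            boolean allOk = true;
            for (String host : hosts) {
                try {
                    InetAddress.getByName(host);  // throws if DNS lookup fails
                } catch (UnknownHostException e) {
                    allOk = false;
                    break;
                }
            }
            if (allOk) {
                return true;
            }
            try {
                Thread.sleep(sleepMillis);  // wait before the next round of lookups
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}
```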

Alternately, perhaps we could add a switch like {{\-partial}} that would return 
success from a partial format as long as a quorum of JNs got formatted.  But I 
don't think it should be the default...

> NameNode Format should not fail for DNS resolution on minority of JournalNode
> -
>
> Key: HDFS-4210
> URL: https://issues.apache.org/jira/browse/HDFS-4210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, journal-node, name-node
>Affects Versions: 2.0.0-alpha
> Environment: CDH4.1.2
>Reporter: Damien Hardy
>Priority: Trivial
>
> Setting: 
>   qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
>   cdh4master01 and cdh4master02 JournalNodes up and running, 
>   cdh4worker03 not yet provisioned (no DNS entry)
> With this:
> `hadoop namenode -format` fails with:
>   12/11/19 14:42:42 FATAL namenode.NameNode: Exception in namenode join
> java.lang.IllegalArgumentException: Unable to construct journal, 
> qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1235)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:745)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1099)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1233)
>   ... 5 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.(IPCLoggerChannel.java:161)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:141)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:353)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:135)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.(QuorumJournalManager.java:104)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.(QuorumJournalManager.java:93)
>   ... 10 more
> I suggest that if the quorum is up, the format should not fail.



[jira] [Reopened] (HDFS-4207) all hadoop fs operations fail if the default fs is down -even if they don't go near it.

2012-11-19 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reopened HDFS-4207:
---


Steve, I am reopening this. Given this change is useful, let's backport 
HADOOP-7207 using this JIRA. 

> all hadoop fs operations fail if the default fs is down -even if they don't 
> go near it.
> ---
>
> Key: HDFS-4207
> URL: https://issues.apache.org/jira/browse/HDFS-4207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.4
>Reporter: Steve Loughran
>Priority: Minor
>
> you can't do any {{hadoop fs}} commands against any Hadoop filesystem (e.g., 
> s3://, a remote hdfs://, webhdfs://) if the default FS of the client is 
> offline. Only operations that need the local fs should be expected to fail in 
> this situation.



[jira] [Commented] (HDFS-4210) NameNode Format should not fail for DNS resolution on minority of JournalNode

2012-11-19 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500466#comment-13500466
 ] 

Todd Lipcon commented on HDFS-4210:
---

Just noticed the NPE in your stack trace, though - that definitely seems worth 
fixing to give a proper DNS error.

> NameNode Format should not fail for DNS resolution on minority of JournalNode
> -
>
> Key: HDFS-4210
> URL: https://issues.apache.org/jira/browse/HDFS-4210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, journal-node, name-node
>Affects Versions: 2.0.0-alpha
> Environment: CDH4.1.2
>Reporter: Damien Hardy
>Priority: Trivial
>
> Setting: 
>   qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
>   cdh4master01 and cdh4master02 JournalNodes up and running, 
>   cdh4worker03 not yet provisioned (no DNS entry)
> With this:
> `hadoop namenode -format` fails with:
>   12/11/19 14:42:42 FATAL namenode.NameNode: Exception in namenode join
> java.lang.IllegalArgumentException: Unable to construct journal, 
> qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1235)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:745)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1099)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1233)
>   ... 5 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.(IPCLoggerChannel.java:161)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:141)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:353)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:135)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.(QuorumJournalManager.java:104)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.(QuorumJournalManager.java:93)
>   ... 10 more
> I suggest that if the quorum is up, the format should not fail.



[jira] [Commented] (HDFS-4210) NameNode Format should not fail for DNS resolution on minority of JournalNode

2012-11-19 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500464#comment-13500464
 ] 

Todd Lipcon commented on HDFS-4210:
---

Currently we don't have any way to format a minority of the nodes in the quorum. 
So, if we allowed you to format with missing nodes, there would be no easy way 
to format the new node when you add it (aside from rsync). Given that, we made 
the decision (for now) to have formatting succeed only if all of the nodes are up.

What's the use case where you want to start running with only 2 nodes up? It 
wouldn't provide any redundancy over just having 1 node.
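The quorum arithmetic behind this can be sketched with a toy model (`journal_fault_tolerance` is a hypothetical helper for illustration, not a Hadoop API):

```python
def journal_fault_tolerance(configured: int, running: int) -> int:
    """Extra JournalNode failures a QJM-style edit log can absorb:
    writes need a majority of the *configured* set, so the margin is
    the number of running nodes minus the majority size (never negative)."""
    majority = configured // 2 + 1
    return max(running - majority, 0)

# 3 configured, 3 running: one node can still fail.
print(journal_fault_tolerance(3, 3))  # 1
# 3 configured, only 2 running: losing any one node stops writes,
# which is no more redundancy than a single-node setup.
print(journal_fault_tolerance(3, 2))  # 0
print(journal_fault_tolerance(1, 1))  # 0
```

This is why starting with two of three nodes buys nothing over one node: the write quorum is still two, so a single further failure halts the journal.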

> NameNode Format should not fail for DNS resolution on minority of JournalNode
> -
>
> Key: HDFS-4210
> URL: https://issues.apache.org/jira/browse/HDFS-4210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, journal-node, name-node
>Affects Versions: 2.0.0-alpha
> Environment: CDH4.1.2
>Reporter: Damien Hardy
>Priority: Trivial
>
> Setting  : 
>   qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
>   cdh4master01 and cdh4master02 JournalNodes up and running, 
>   cdh4worker03 not yet provisioned (no DNS entry)
> With :
> `hadoop namenode -format` fails with :
>   12/11/19 14:42:42 FATAL namenode.NameNode: Exception in namenode join
> java.lang.IllegalArgumentException: Unable to construct journal, 
> qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1235)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:745)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1099)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1233)
>   ... 5 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:161)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:141)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:353)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:135)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:104)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:93)
>   ... 10 more
> I suggest that format should not fail if a quorum of JournalNodes is up.



[jira] [Resolved] (HDFS-4207) all hadoop fs operations fail if the default fs is down -even if they don't go near it.

2012-11-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-4207.
--

Resolution: Duplicate

> all hadoop fs operations fail if the default fs is down -even if they don't 
> go near it.
> ---
>
> Key: HDFS-4207
> URL: https://issues.apache.org/jira/browse/HDFS-4207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.4
>Reporter: Steve Loughran
>Priority: Minor
>
> you can't run any {{hadoop fs}} commands against any Hadoop filesystem (e.g. 
> s3://, a remote hdfs://, webhdfs://) if the client's default FS is 
> offline. Only operations that actually need the default FS should be expected 
> to fail in this situation.



[jira] [Updated] (HDFS-1824) delay instantiation of file system object until it is needed (linked to HADOOP-7207)

2012-11-19 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-1824:
-

Fix Version/s: 0.23.0

> delay instantiation of file system object until it is needed (linked to 
> HADOOP-7207)
> 
>
> Key: HDFS-1824
> URL: https://issues.apache.org/jira/browse/HDFS-1824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Fix For: 0.23.0
>
> Attachments: HDFS-1824-1-22.patch, HDFS-1824-1.patch, HDFS-1824.patch
>
>
> Also refactor the code a little to avoid checking for an instance of DFS in 
> multiple places. 
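The delayed-instantiation pattern this issue asks for can be sketched with a toy model (in Python for brevity; `LazyShell` and its factory are hypothetical illustrations, not the actual FsShell/FileSystem code):

```python
class LazyShell:
    """Resolve the (possibly unreachable) default filesystem only when a
    command actually needs it, instead of at shell construction time."""

    def __init__(self, factory):
        self._factory = factory   # e.g. a function that dials the NameNode
        self._fs = None

    @property
    def fs(self):
        if self._fs is None:      # first real use pays the connection cost
            self._fs = self._factory()
        return self._fs

calls = []
shell = LazyShell(lambda: calls.append("connect") or "fs-handle")
assert calls == []                # nothing resolved at construction
assert shell.fs == "fs-handle"    # resolved on first use
assert calls == ["connect"]
assert shell.fs == "fs-handle"    # cached; factory not called again
```

With this shape, a command that never touches the default FS never triggers the connection, which is also what HDFS-4207 wants.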



[jira] [Commented] (HDFS-4207) all hadoop fs operations fail if the default fs is down -even if they don't go near it.

2012-11-19 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500430#comment-13500430
 ] 

Steve Loughran commented on HDFS-4207:
--

Looks like it, though there are no version tags in HADOOP-7207; I'll track that 
down via CHANGES.txt.

> all hadoop fs operations fail if the default fs is down -even if they don't 
> go near it.
> ---
>
> Key: HDFS-4207
> URL: https://issues.apache.org/jira/browse/HDFS-4207
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 1.0.4
>Reporter: Steve Loughran
>Priority: Minor
>
> you can't run any {{hadoop fs}} commands against any Hadoop filesystem (e.g. 
> s3://, a remote hdfs://, webhdfs://) if the client's default FS is 
> offline. Only operations that actually need the default FS should be expected 
> to fail in this situation.



[jira] [Updated] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HDFS-4199:
-

Attachment: (was: HDFS-4199.patch)

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199--b.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.



[jira] [Updated] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HDFS-4199:
-

Attachment: HDFS-4199--b.patch

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199--b.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.



[jira] [Updated] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HDFS-4199:
-

Attachment: HDFS-4199.patch

Fixed and tested the HdfsVolumeId#compareTo(null) case.

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.



[jira] [Updated] (HDFS-4178) shell scripts should not close stderr

2012-11-19 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4178:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Yes, I have committed to trunk and branch-2.  Thanks Andy!  (I was under the 
misimpression that fds 0-2 were reserved unless explicitly opened, or that they 
were redirected to /dev/null after dropping the controlling terminal.)

> shell scripts should not close stderr
> -
>
> Key: HDFS-4178
> URL: https://issues.apache.org/jira/browse/HDFS-4178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hdfs4178.txt
>
>
> The {{start-dfs.sh}} and {{stop-dfs.sh}} scripts close stderr for some 
> subprocesses using the construct
> bq. {{2>&-}}
> This is dangerous because child processes started up under this scenario will 
> re-use file descriptor 2 for opened files.  Since libc and many other 
> codepaths assume that file descriptor 2 can be written to in error conditions, 
> this can potentially result in data corruption.
> It is much better to redirect stderr using the construct {{2>/dev/null}}.
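The fd-reuse hazard described above can be demonstrated with a short, portable sketch (in Python rather than shell, purely for illustration of the POSIX lowest-free-descriptor behavior):

```python
import os
import tempfile

# What `2>&-` does: fd 2 is simply closed. The next open() then hands fd 2
# to an ordinary data file, and anything "written to stderr" silently lands
# in that file instead.

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()

saved = os.dup(2)               # keep the real stderr so we can restore it
os.close(2)                     # simulate the child's closed stderr

fd = os.open(tmp.name, os.O_WRONLY | os.O_TRUNC)
assert fd == 2                  # open() returns the lowest free descriptor

os.write(2, b"error text\n")    # a library "reporting an error" corrupts the file

os.close(fd)
os.dup2(saved, 2)               # put the real stderr back
os.close(saved)

with open(tmp.name) as f:
    recovered = f.read().strip()
os.unlink(tmp.name)

print(recovered)                # prints: error text
```

With `2>/dev/null` instead, fd 2 stays open (pointing at the null device), so later `open()` calls never receive descriptor 2.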



[jira] [Commented] (HDFS-4178) shell scripts should not close stderr

2012-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500295#comment-13500295
 ] 

Hudson commented on HDFS-4178:
--

Integrated in Hadoop-trunk-Commit #3043 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3043/])
HDFS-4178. Shell scripts should not close stderr (Andy Isaacson via daryn) 
(Revision 1411229)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1411229
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh


> shell scripts should not close stderr
> -
>
> Key: HDFS-4178
> URL: https://issues.apache.org/jira/browse/HDFS-4178
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hdfs4178.txt
>
>
> The {{start-dfs.sh}} and {{stop-dfs.sh}} scripts close stderr for some 
> subprocesses using the construct
> bq. {{2>&-}}
> This is dangerous because child processes started up under this scenario will 
> re-use file descriptor 2 for opened files.  Since libc and many other 
> codepaths assume that file descriptor 2 can be written to in error conditions, 
> this can potentially result in data corruption.
> It is much better to redirect stderr using the construct {{2>/dev/null}}.



[jira] [Commented] (HDFS-2049) Should add the column name tips for HDFS shell command

2012-11-19 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500279#comment-13500279
 ] 

Daryn Sharp commented on HDFS-2049:
---

Since {{ContentSummary#toString(boolean)}} displays a header, doesn't this 
double up the header output?  I haven't tried the patch, but it looks that way 
from the test.  A minor pre-existing nit: "QUATA" is misspelled.

> Should add the column name tips for HDFS shell command
> --
>
> Key: HDFS-2049
> URL: https://issues.apache.org/jira/browse/HDFS-2049
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs client
>Affects Versions: 0.21.0
>Reporter: Denny Ye
>Assignee: liang xie
>  Labels: comand, newbie, shell
> Fix For: 3.0.0
>
> Attachments: HDFS-2049.txt, HDFS-2049.txt
>
>
> :abc:root > bin/hadoop fs -count -q /ABC
> 11/06/07 18:05:54 INFO security.Groups: Group mapping 
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30
> 11/06/07 18:05:54 WARN conf.Configuration: mapred.task.id is deprecated. 
> Instead, use mapreduce.task.attempt.id
> none infnone inf   13 
>  372  0 hdfs://abc:9000/ABC
> --
> I got the result columns without column names, which is confusing. It would 
> be friendlier to print them like this:
> :abc:root > bin/hadoop fs -count -q /ABC
> 11/06/07 18:05:54 INFO security.Groups: Group mapping 
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30
> 11/06/07 18:05:54 WARN conf.Configuration: mapred.task.id is deprecated. 
> Instead, use mapreduce.task.attempt.id
> QUOTA, REMAINING_QUATA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, 
> DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
> none infnone inf   13 
>  372  0 hdfs://abc:9000/ABC
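The header-plus-values layout requested above can be sketched as follows (`format_count` and the column list are hypothetical illustrations, not the actual FsShell code; the column spelled "REMAINING_QUATA" in the description is corrected here):

```python
# Hypothetical column names for `hadoop fs -count -q` output.
COLUMNS = ["QUOTA", "REMAINING_QUOTA", "SPACE_QUOTA",
           "REMAINING_SPACE_QUOTA", "DIR_COUNT", "FILE_COUNT",
           "CONTENT_SIZE", "PATHNAME"]

def format_count(values):
    """Right-align each value under its column name, header row first."""
    widths = [max(len(c), len(str(v))) for c, v in zip(COLUMNS, values)]
    header = "  ".join(c.rjust(w) for c, w in zip(COLUMNS, widths))
    row = "  ".join(str(v).rjust(w) for v, w in zip(values, widths))
    return header + "\n" + row

print(format_count(["none", "inf", "none", "inf", 1, 3,
                    372, "hdfs://abc:9000/ABC"]))
```

Printing the header once per invocation keeps the output self-describing without changing the value row's column order.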



[jira] [Commented] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500246#comment-13500246
 ] 

Hadoop QA commented on HDFS-3970:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554166/HDFS-3970.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3542//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3542//console

This message is automatically generated.

> BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
> of DataStorage to read prev version file.
> ---
>
> Key: HDFS-3970
> URL: https://issues.apache.org/jira/browse/HDFS-3970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-3970.patch
>
>
> {code}// read attributes out of the VERSION file of previous directory
> DataStorage prevInfo = new DataStorage();
> prevInfo.readPreviousVersionProperties(bpSd);{code}
> In the above code snippet a BlockPoolSliceStorage instance should be used; 
> otherwise rollback results in the 'storageType' property missing, since it 
> is not present in the initial VERSION file.



[jira] [Created] (HDFS-4210) NameNode Format should not fail for DNS resolution on minority of JournalNode

2012-11-19 Thread Damien Hardy (JIRA)
Damien Hardy created HDFS-4210:
--

 Summary: NameNode Format should not fail for DNS resolution on 
minority of JournalNode
 Key: HDFS-4210
 URL: https://issues.apache.org/jira/browse/HDFS-4210
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, journal-node, name-node
Affects Versions: 2.0.0-alpha
 Environment: CDH4.1.2
Reporter: Damien Hardy
Priority: Trivial


Setting  : 
  qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
  cdh4master01 and cdh4master02 JournalNodes up and running, 
  cdh4worker03 not yet provisioned (no DNS entry)

With :
`hadoop namenode -format` fails with :
  12/11/19 14:42:42 FATAL namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: Unable to construct journal, 
qjournal://cdh4master01:8485;cdh4master02:8485;cdh4worker03:8485/hdfscluster
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1235)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:226)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:193)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:745)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1099)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1233)
... 5 more
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:161)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:141)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:353)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:135)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:104)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:93)
... 10 more

I suggest that format should not fail if a quorum of JournalNodes is up.



[jira] [Commented] (HDFS-4196) Support renaming of snapshots

2012-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500218#comment-13500218
 ] 

Hudson commented on HDFS-4196:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #16 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/16/])
HDFS-4196. Support renaming of snapshots. Contributed by Jing Zhao 
(Revision 1410986)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410986
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOpCodes.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotTestHelper.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java


> Support renaming of snapshots
> -
>
> Key: HDFS-4196
> URL: https://issues.apache.org/jira/browse/HDFS-4196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node, name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4196.001.patch, HDFS-4196.002.patch, 
> HDFS-4196.003.patch, HDFS-4196.004.patch
>
>
> Add the functionality of renaming an existing snapshot with a given new name.



[jira] [Commented] (HDFS-4206) Change the fields in INode and its subclasses to private

2012-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500215#comment-13500215
 ] 

Hudson commented on HDFS-4206:
--

Integrated in Hadoop-Mapreduce-trunk #1262 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1262/])
HDFS-4206. Change the fields in INode and its subclasses to private. 
(Revision 1410996)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410996
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java


> Change the fields in INode and its subclasses to private
> 
>
> Key: HDFS-4206
> URL: https://issues.apache.org/jira/browse/HDFS-4206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 3.0.0
>
> Attachments: h4206_20121116b.patch, h4206_20121116.patch
>
>
> The fields in INode and its subclasses are sometimes directly accessed and 
> modified by other code.  It is better to change them to private and use 
> getters/setters.



[jira] [Commented] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500214#comment-13500214
 ] 

Ivan A. Veselovsky commented on HDFS-4199:
--

The patch just adds one new test; it cannot affect 
org.apache.hadoop.hdfs.TestSafeMode in any way -- that test likely failed for another reason.

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.



[jira] [Commented] (HDFS-4206) Change the fields in INode and its subclasses to private

2012-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500210#comment-13500210
 ] 

Hudson commented on HDFS-4206:
--

Integrated in Hadoop-Hdfs-trunk #1231 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1231/])
HDFS-4206. Change the fields in INode and its subclasses to private. 
(Revision 1410996)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410996
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java


> Change the fields in INode and its subclasses to private
> 
>
> Key: HDFS-4206
> URL: https://issues.apache.org/jira/browse/HDFS-4206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 3.0.0
>
> Attachments: h4206_20121116b.patch, h4206_20121116.patch
>
>
> The fields in INode and its subclasses are sometimes directly accessed and 
> modified by other code.  It is better to change them to private and use 
> getters/setters.



[jira] [Commented] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500193#comment-13500193
 ] 

Hadoop QA commented on HDFS-4199:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554147/HDFS-4199.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestSafeMode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3541//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3541//console

This message is automatically generated.

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4190) Read complete block into memory once in BlockScanning and reduce concurrent disk access

2012-11-19 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500189#comment-13500189
 ] 

Uma Maheswara Rao G commented on HDFS-4190:
---

Thanks a lot Nicholas and Todd for the comments. I am trying to refactor the 
BlockReadLocal class and use it in scanning; let me check what improvement we 
can get.

> Read complete block into memory once in BlockScanning and reduce concurrent 
> disk access
> ---
>
> Key: HDFS-4190
> URL: https://issues.apache.org/jira/browse/HDFS-4190
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 3.0.0
>Reporter: Uma Maheswara Rao G
>
> When we perform bulk write operations to DFS, we observed that block scanning 
> is one bottleneck for concurrent disk access.
> To see the real load on the disks, keep a single DataNode with a local client 
> flushing data to DFS.
> When we switch off block scanning we see a >10% improvement. I will update 
> the real figures in a comment.
> Even though I am doing only write operations, there is implicitly a read 
> operation for each block due to block scanning. The next scan happens only 
> after 21 days, but one scan happens right after adding the block, and that is 
> the source of the concurrent disk access.
> Another point to note is that block scanning also reads the block packet by 
> packet. Since we know we have to read and scan the complete block, it may be 
> better to load the complete block once and verify the checksums on that data.
> I tried this with memory-mapped buffers: I mapped the complete block once in 
> block scanning and did the checksum verification against it, and saw a good 
> improvement in the bulk write scenario.
> But we do not have any API to clean the mapped buffer immediately. In my 
> experiment I used the Cleaner class from the sun package, which is not 
> appropriate for production; we would have to write a JNI call to clean up the 
> mmapped buffer.
> I am not sure whether I have missed something here; please correct me if so.
> Thoughts?
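A minimal JDK-only sketch of the idea described above (not the actual DataNode implementation): map a whole block file once and run the checksum over the mapped region in a single pass, instead of re-reading it packet by packet. The class and method names are illustrative.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.zip.CRC32;

// Sketch: checksum a "block" file via one memory mapping.
public class MappedChecksumSketch {
    static long mappedCrc32(Path blockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(blockFile, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            CRC32 crc = new CRC32();
            crc.update(buf);   // one pass over the whole mapped block
            return crc.getValue();
            // Note: the JDK has no supported API to unmap 'buf' eagerly; it is
            // released only when the buffer is garbage-collected -- exactly the
            // cleanup problem (sun Cleaner / JNI) the comment above describes.
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("block", null);
        Files.write(f, "abc".getBytes(StandardCharsets.US_ASCII));
        System.out.printf("crc32=%08x%n", mappedCrc32(f)); // crc32 of "abc" is 352441c2
    }
}
```

HDFS blocks use chunked CRC32 checksums rather than one file-wide CRC; this only illustrates the "map once, verify in one pass" access pattern.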

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2012-11-19 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-3970:


Attachment: HDFS-3970.patch

Attached the patch

> BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
> of DataStorage to read prev version file.
> ---
>
> Key: HDFS-3970
> URL: https://issues.apache.org/jira/browse/HDFS-3970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-3970.patch
>
>
> {code}// read attributes out of the VERSION file of previous directory
> DataStorage prevInfo = new DataStorage();
> prevInfo.readPreviousVersionProperties(bpSd);{code}
> In the above code snippet a BlockPoolSliceStorage instance should be used. 
> Otherwise, rollback results in the 'storageType' property missing, since it 
> is not present in the initial VERSION file.
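A JDK-only sketch of the failure mode (illustrative, not Hadoop code): a VERSION file is a plain java.util.Properties file, and each storage class only handles the keys it knows about, so reading it with the wrong class silently drops properties such as 'storageType'. The key sets below are hypothetical stand-ins for the real DataStorage and BlockPoolSliceStorage property lists.

```java
import java.util.Properties;

// Sketch: a reader that copies only the keys it knows about drops the rest.
public class VersionFileSketch {
    // Keys a hypothetical DataStorage-style reader understands.
    static final String[] DATA_KEYS = {"layoutVersion", "namespaceID"};
    // Keys a hypothetical BlockPoolSliceStorage-style reader understands.
    static final String[] BP_KEYS = {"layoutVersion", "namespaceID", "storageType"};

    // Copy only the known keys out of the loaded VERSION properties.
    static Properties readKnown(Properties all, String[] known) {
        Properties out = new Properties();
        for (String k : known) {
            String v = all.getProperty(k);
            if (v != null) out.setProperty(k, v);
        }
        return out;
    }

    public static void main(String[] args) {
        Properties version = new Properties();
        version.setProperty("layoutVersion", "-40");
        version.setProperty("namespaceID", "12345");
        version.setProperty("storageType", "DATA_NODE");
        // The narrower reader loses 'storageType'; the block-pool-aware one keeps it.
        System.out.println(readKnown(version, DATA_KEYS).getProperty("storageType")); // null
        System.out.println(readKnown(version, BP_KEYS).getProperty("storageType"));   // DATA_NODE
    }
}
```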

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3970) BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead of DataStorage to read prev version file.

2012-11-19 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-3970:


Status: Patch Available  (was: Open)

> BlockPoolSliceStorage#doRollback(..) should use BlockPoolSliceStorage instead 
> of DataStorage to read prev version file.
> ---
>
> Key: HDFS-3970
> URL: https://issues.apache.org/jira/browse/HDFS-3970
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.2-alpha, 3.0.0
>Reporter: Vinay
>Assignee: Vinay
> Attachments: HDFS-3970.patch
>
>
> {code}// read attributes out of the VERSION file of previous directory
> DataStorage prevInfo = new DataStorage();
> prevInfo.readPreviousVersionProperties(bpSd);{code}
> In the above code snippet a BlockPoolSliceStorage instance should be used. 
> Otherwise, rollback results in the 'storageType' property missing, since it 
> is not present in the initial VERSION file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4206) Change the fields in INode and its subclasses to private

2012-11-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500138#comment-13500138
 ] 

Hudson commented on HDFS-4206:
--

Integrated in Hadoop-Yarn-trunk #41 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/41/])
HDFS-4206. Change the fields in INode and its subclasses to private. 
(Revision 1410996)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410996
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageSerialization.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryWithQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeSymlink.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java


> Change the fields in INode and its subclasses to private
> 
>
> Key: HDFS-4206
> URL: https://issues.apache.org/jira/browse/HDFS-4206
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Fix For: 3.0.0
>
> Attachments: h4206_20121116b.patch, h4206_20121116.patch
>
>
> The fields in INode and its subclasses are sometimes directly accessed and 
> modified by other code.  It is better to change them to private and use 
> getters/setters.
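A minimal illustration of the pattern this change applies (the class and field names below are illustrative, not the actual INode API): fields become private, and all access goes through getters/setters.

```java
// Sketch: a field made private, accessed only via getter/setter.
public class EncapsulationSketch {
    static class Node {
        private String name;            // was directly accessed by other code
        private long modificationTime;  // now private, mutated only via setters

        String getName() { return name; }
        void setName(String name) { this.name = name; }
        long getModificationTime() { return modificationTime; }
        void setModificationTime(long t) { this.modificationTime = t; }
    }

    public static void main(String[] args) {
        Node n = new Node();
        n.setName("foo");            // callers go through the accessors,
        n.setModificationTime(42L);  // never touch the fields directly
        System.out.println(n.getName() + " " + n.getModificationTime());
    }
}
```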

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4199) Provide test for HdfsVolumeId

2012-11-19 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HDFS-4199:
-

Attachment: HDFS-4199.patch

New version of the HDFS-4199 patch: the comparison checking has been made more generic.

> Provide test for HdfsVolumeId
> -
>
> Key: HDFS-4199
> URL: https://issues.apache.org/jira/browse/HDFS-4199
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.0.2-alpha
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9053.patch, HDFS-4199.patch
>
>
> Provide test for HdfsVolumeId to improve the code coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4047) BPServiceActor has nested shouldRun loops

2012-11-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13500104#comment-13500104
 ] 

Hadoop QA commented on HDFS-4047:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12554133/HDFS-4047.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3540//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3540//console

This message is automatically generated.

> BPServiceActor has nested shouldRun loops
> -
>
> Key: HDFS-4047
> URL: https://issues.apache.org/jira/browse/HDFS-4047
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Eli Collins
>Priority: Minor
> Attachments: HDFS-4047.patch, HDFS-4047.patch, hdfs-4047.txt, 
> hdfs-4047.txt
>
>
> BPServiceActor#run and offerService both have while-shouldRun loops. We only 
> need the outer one, i.e. we can hoist the info log from offerService out to 
> run and remove the inner while loop.
> {code}
> BPServiceActor#run:
> while (shouldRun()) {
>   try {
> offerService();
>   } catch (Exception ex) {
> ...
> offerService:
> while (shouldRun()) {
>   try {
> {code}
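A self-contained sketch of the refactoring proposed above (names are illustrative stand-ins, not the real BPServiceActor): offerService stops looping on shouldRun() itself and does one unit of work, so only the outer loop remains.

```java
// Sketch: collapse nested shouldRun() loops into the single outer loop.
public class SingleLoopSketch {
    private int remaining;   // stand-in for the real run condition
    int services;            // counts offerService calls, for demonstration

    SingleLoopSketch(int iterations) { this.remaining = iterations; }

    boolean shouldRun() { return remaining > 0; }

    // Formerly looped on shouldRun() itself; now performs one unit of work.
    void offerService() {
        services++;
        remaining--;
    }

    // Only the outer loop remains, as in BPServiceActor#run.
    void run() {
        while (shouldRun()) {
            try {
                offerService();
            } catch (Exception ex) {
                // log and retry, as the real actor does
            }
        }
    }

    public static void main(String[] args) {
        SingleLoopSketch actor = new SingleLoopSketch(3);
        actor.run();
        System.out.println("offerService ran " + actor.services + " times"); // 3 times
    }
}
```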

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4175) Extend existing TestSnapshot to support more complicated directory structure and more modifications on current files/directories

2012-11-19 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4175:


Attachment: HDFS-4175.003.patch

Suresh and Colin, thanks for the comments! New patch uploaded to address the 
comments.

> Extend existing TestSnapshot to support more complicated directory structure 
> and more modifications on current files/directories
> -
>
> Key: HDFS-4175
> URL: https://issues.apache.org/jira/browse/HDFS-4175
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node, name-node
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4175.003.patch, HDFS-4175-snapshot-test.002.patch, 
> snapshot-test.001.patch
>
>
> Currently the TestSnapshot only uses a static simple directory structure for 
> testing snapshot functionalities. We need to test snapshot under more 
> complicated directory structures. 
> Also the current TestSnapshot only compares snapshots before/after three 
> types of modifications: file creation, deletion, and appending. We need to 
> test snapshots for more file modifications, including file permission change, 
> replication factor change, owner/group change, and sub-directory 
> creation/deletion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira