[jira] [Commented] (HDFS-3979) Fix hsync and hflush semantics.

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475546#comment-13475546
 ] 

Hadoop QA commented on HDFS-3979:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12549002/hdfs-3979-v3.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3327//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3327//console

This message is automatically generated.

> Fix hsync and hflush semantics.
> ---
>
> Key: HDFS-3979
> URL: https://issues.apache.org/jira/browse/HDFS-3979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node, hdfs client
>Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt, hdfs-3979-v3.txt
>
>
> See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
> is not on a synchronous path from the DFSClient, hence it is possible that a 
> DN loses data that it has already acknowledged as persisted to a client.
> Edit: Spelling.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475540#comment-13475540
 ] 

Binglin Chang commented on HDFS-4046:
-

Attaching a new patch fixing the TestAuditLogs bug:
InputStream.read() returns an int value >= 0 on any successful read, so
assertTrue("failed to read from file", val > 0);
should be changed to:
assertTrue("failed to read from file", val >= 0);
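
A small self-contained sketch (hypothetical demo class, not part of TestAuditLogs) illustrates the read() contract behind this fix: a zero byte is read successfully and returned as 0, so an assertion requiring val > 0 rejects a valid read.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class ReadContractDemo {
    public static void main(String[] args) throws IOException {
        // The first byte is 0x00: read() returns 0, which is a successful
        // read, so a check requiring val > 0 would fail spuriously here.
        int val = new ByteArrayInputStream(new byte[] {0x00, 0x2A}).read();
        // read() returns -1 only at end of stream.
        int eof = new ByteArrayInputStream(new byte[0]).read();
        if (val < 0) throw new AssertionError("failed to read from file");
        System.out.println("val=" + val + " eof=" + eof);
    }
}
```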

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch, 
> HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generated C++ code from hdfs.proto, the generated file would not compile, 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently already do).
> I can make a patch for solution 2.
>  
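
Solution 2 can be sketched as follows; the enum names and ids below are illustrative stand-ins, not Hadoop's actual definitions. The wire-side enum avoids the name NULL entirely, and conversion goes through the shared integer id rather than the enum name:

```java
// Illustrative sketch of solution 2 (names and ids are hypothetical):
// the wire-side enum never spells NULL, and the Java-side enum is looked
// up by the shared integer id instead of by name.
public class ChecksumMapping {
    enum WireChecksum { CHECKSUM_NONE(0), CHECKSUM_CRC32(1), CHECKSUM_CRC32C(2);
        final int id;
        WireChecksum(int id) { this.id = id; }
    }
    enum JavaChecksum { NULL(0), CRC32(1), CRC32C(2);
        final int id;
        JavaChecksum(int id) { this.id = id; }
        static JavaChecksum fromId(int id) {
            for (JavaChecksum t : values()) {
                if (t.id == id) return t;
            }
            throw new IllegalArgumentException("unknown checksum id " + id);
        }
    }

    public static void main(String[] args) {
        // CHECKSUM_NONE on the wire maps back to NULL in Java without sharing
        // the name, so generated C++ never collides with the NULL macro.
        System.out.println(JavaChecksum.fromId(WireChecksum.CHECKSUM_NONE.id));
    }
}
```

The same id-based lookup works in the generated C++, where the enum's integer value is unaffected by the rename.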



[jira] [Updated] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-4046:


Attachment: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch, 
> HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generated C++ code from hdfs.proto, the generated file would not compile, 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently already do).
> I can make a patch for solution 2.
>  



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475537#comment-13475537
 ] 

Binglin Chang commented on HDFS-4046:
-

Oops, it looks like there is a bug in 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogs...
Should I fix the bug in this patch, or file a separate JIRA?

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generated C++ code from hdfs.proto, the generated file would not compile, 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently already do).
> I can make a patch for solution 2.
>  



[jira] [Updated] (HDFS-3979) Fix hsync and hflush semantics.

2012-10-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HDFS-3979:


Attachment: hdfs-3979-v3.txt

This little change makes TestHSync fail most of the time without the rest of 
the patch, and never fail with the full patch applied.

(In HDFS-744 I had avoided this race by updating the sync metric first. I know 
that was a hack... By updating the metric last in BlockReceiver.flushOrSync, 
the race becomes apparent again.)

We do have pipeline tests that seem to verify correct pipeline behavior in the 
face of failures via fault injection: TestFiPipelines and TestFiHFlush.

In terms of the API3/API4 discussion, I think we agree that hflush should 
follow API4, right? (Otherwise we'd end up with unduly complex code.)
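
The flush-versus-sync distinction at stake here can be illustrated with plain java.io (an analogy only, not the HDFS code path): flush() hands data to the OS, while getFD().sync() forces it to the device, and acknowledging a writer between the two is exactly the window in which "persisted" data can be lost.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class FlushVsSyncDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("hsync-demo", ".dat");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write("acked bytes".getBytes("UTF-8"));
            out.flush();        // hflush analogue: data handed to the OS
            // A crash after an ack but before the next line loses data the
            // client already believes is durable -- the race described above.
            out.getFD().sync(); // hsync analogue: data forced to the device
        }
        System.out.println("length=" + f.length());
    }
}
```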


> Fix hsync and hflush semantics.
> ---
>
> Key: HDFS-3979
> URL: https://issues.apache.org/jira/browse/HDFS-3979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node, hdfs client
>Affects Versions: 0.22.0, 0.23.0, 2.0.0-alpha
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Attachments: hdfs-3979-sketch.txt, hdfs-3979-v2.txt, hdfs-3979-v3.txt
>
>
> See discussion in HDFS-744. The actual sync/flush operation in BlockReceiver 
> is not on a synchronous path from the DFSClient, hence it is possible that a 
> DN loses data that it has already acknowledged as persisted to a client.
> Edit: Spelling.



[jira] [Commented] (HDFS-4043) Namenode Kerberos Login does not use proper hostname for host qualified hdfs principal name.

2012-10-12 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475524#comment-13475524
 ] 

Brahma Reddy Battula commented on HDFS-4043:


Hi Ahad Rana,
I think this is the same as HDFS-3980. Please refer to the following comment:
https://issues.apache.org/jira/browse/HDFS-3980?focusedCommentId=13469267&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13469267

Can I mark this as a duplicate?

Please correct me if I am wrong.

> Namenode Kerberos Login does not use proper hostname for host qualified hdfs 
> principal name.
> 
>
> Key: HDFS-4043
> URL: https://issues.apache.org/jira/browse/HDFS-4043
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha, 2.0.3-alpha
> Environment: CDH4U1 on Ubuntu 12.04
>Reporter: Ahad Rana
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The Namenode uses the loginAsNameNodeUser method in NameNode.java to log in 
> using the hdfs principal. This method in turn invokes SecurityUtil.login with 
> a hostname (the last parameter) obtained via a call to InetAddress.getHostName. 
> This call does not always return the fully qualified host name, and thus 
> causes the namenode login to fail due to Kerberos's inability to find a 
> matching hdfs principal in the hdfs.keytab file. Instead it should use 
> InetAddress.getCanonicalHostName. This is consistent with what 
> SecurityUtil.java uses internally to log in other services, such as the 
> DataNode. 
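
The difference the description relies on can be checked with a small standalone snippet (a hypothetical demo class; its output depends on the local resolver configuration, so none is shown):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostnameDemo {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress addr = InetAddress.getLocalHost();
        // getHostName may return a short name such as "nn1", depending on
        // how the local resolver is configured.
        System.out.println("getHostName:          " + addr.getHostName());
        // getCanonicalHostName performs a reverse lookup and returns the
        // FQDN when DNS provides one, e.g. "nn1.example.com" -- the form a
        // host-qualified principal like hdfs/nn1.example.com@REALM needs.
        System.out.println("getCanonicalHostName: " + addr.getCanonicalHostName());
    }
}
```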



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475508#comment-13475508
 ] 

Hadoop QA commented on HDFS-4046:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12548992/HDFS-4046-ChecksumType-NULL.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestAuditLogs

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3326//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3326//console

This message is automatically generated.

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generated C++ code from hdfs.proto, the generated file would not compile, 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently already do).
> I can make a patch for solution 2.
>  



[jira] [Commented] (HDFS-3990) NN's health report has severe performance problems

2012-10-12 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475497#comment-13475497
 ] 

Ravi Prakash commented on HDFS-3990:


I'm sorry I've been out of the loop, but why would caching be the solution? 
If we want to reassign the IP address-to-hostname mapping for a single node, 
would it require a restart of the NN? Is there a timeout on the cache? Even 
with a timeout I would have my reservations.
Do nodes have Hadoop-generated unique IDs that we can leverage and match with 
the IP addresses that we have cached?
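
One way a timeout could address the staleness concern raised above is a TTL on each cached entry; this is a minimal hypothetical sketch, not the approach taken by the attached patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal TTL cache sketch: an expired entry triggers a fresh lookup, so a
// reassigned IP-to-hostname mapping is picked up without restarting the NN.
public class TtlCache {
    private static final class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    String get(String ip, java.util.function.Function<String, String> resolve) {
        Entry e = cache.get(ip);
        if (e == null || System.currentTimeMillis() >= e.expiresAt) {
            // Expired or absent: resolve (outside any namespace lock).
            e = new Entry(resolve.apply(ip), System.currentTimeMillis() + ttlMillis);
            cache.put(ip, e);
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        TtlCache c = new TtlCache(50);
        System.out.println(c.get("10.0.0.1", ip -> "dn1.example.com")); // miss: resolves
        System.out.println(c.get("10.0.0.1", ip -> "CHANGED"));         // hit: cached value
        Thread.sleep(60);                                               // let the entry expire
        System.out.println(c.get("10.0.0.1", ip -> "dn1-new.example.com")); // re-resolves
    }
}
```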

> NN's health report has severe performance problems
> --
>
> Key: HDFS-3990
> URL: https://issues.apache.org/jira/browse/HDFS-3990
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3990.patch
>
>
> The dfshealth page will place a read lock on the namespace while it does a 
> DNS lookup for every DN.  On a multi-thousand-node cluster, this often 
> results in 10s+ load time for the health page.  10 concurrent requests were 
> found to cause 7m+ load times, during which write operations blocked.



[jira] [Commented] (HDFS-4009) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475489#comment-13475489
 ] 

Hadoop QA commented on HDFS-4009:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548986/hdfs-4009-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3324//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3324//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3324//console

This message is automatically generated.

> WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
> -
>
> Key: HDFS-4009
> URL: https://issues.apache.org/jira/browse/HDFS-4009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Tom White
>Assignee: Karthik Kambatla
> Attachments: hadoop-8852.patch, hadoop-8852.patch, 
> hadoop-8852-v1.patch, hdfs-4009-v1.patch
>
>
> Parent JIRA to track the work of removing delegation tokens from these 
> filesystems. 
> This JIRA has evolved from the initial issue of these filesystems not 
> stopping the DelegationTokenRenewer thread they were creating.
> After further investigation, Daryn pointed out - "If you can get a token, you 
> don't need a token"! Hence, these filesystems shouldn't use delegation tokens.
> Evolution of the JIRA is listed below:
> Update 2:
> DelegationTokenRenewer is not required. The filesystems that are using it 
> already have Krb tickets and do not need tokens. Remove 
> DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
> filesystems.
> Update 1:
> DelegationTokenRenewer should be a singleton - the instance and renewer 
> threads should be created/started lazily. The filesystems using the renewer 
> shouldn't need to explicitly start/stop the renewer, only register/de-register 
> for token renewal.
> Initial issue:
> HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
> thread when they are closed. 



[jira] [Updated] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-4046:


Status: Patch Available  (was: Open)

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generated C++ code from hdfs.proto, the generated file would not compile, 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently already do).
> I can make a patch for solution 2.
>  



[jira] [Updated] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-4046:


Attachment: HDFS-4046-ChecksumType-NULL.patch

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Attachments: HDFS-4046-ChecksumType-NULL.patch
>
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generated C++ code from hdfs.proto, the generated file would not compile, 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
> the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently already do).
> I can make a patch for solution 2.
>  



[jira] [Updated] (HDFS-4009) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-4009:
---

Status: Patch Available  (was: Reopened)

> WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
> -
>
> Key: HDFS-4009
> URL: https://issues.apache.org/jira/browse/HDFS-4009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Tom White
>Assignee: Karthik Kambatla
> Attachments: hadoop-8852.patch, hadoop-8852.patch, 
> hadoop-8852-v1.patch, hdfs-4009-v1.patch
>
>
> Parent JIRA to track the work of removing delegation tokens from these 
> filesystems. 
> This JIRA has evolved from the initial issue of these filesystems not 
> stopping the DelegationTokenRenewer thread they were creating.
> After further investigation, Daryn pointed out - "If you can get a token, you 
> don't need a token"! Hence, these filesystems shouldn't use delegation tokens.
> Evolution of the JIRA is listed below:
> Update 2:
> DelegationTokenRenewer is not required. The filesystems that are using it 
> already have Krb tickets and do not need tokens. Remove 
> DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
> filesystems.
> Update 1:
> DelegationTokenRenewer should be a singleton - the instance and renewer 
> threads should be created/started lazily. The filesystems using the renewer 
> shouldn't need to explicitly start/stop the renewer, only register/de-register 
> for token renewal.
> Initial issue:
> HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
> thread when they are closed. 




[jira] [Updated] (HDFS-4009) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-12 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-4009:
---

Attachment: hdfs-4009-v1.patch

Uploading patch addressing Daryn's and Owen's comments.

The current version makes DelegationTokenRenewer a singleton, and the 
filesystems register/de-register.

Firing off Jenkins to see if it throws any findbugs warnings.

> WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
> -
>
> Key: HDFS-4009
> URL: https://issues.apache.org/jira/browse/HDFS-4009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Tom White
>Assignee: Karthik Kambatla
> Attachments: hadoop-8852.patch, hadoop-8852.patch, 
> hadoop-8852-v1.patch, hdfs-4009-v1.patch
>
>
> Parent JIRA to track the work of removing delegation tokens from these 
> filesystems. 
> This JIRA has evolved from the initial issue of these filesystems not 
> stopping the DelegationTokenRenewer thread they were creating.
> After further investigation, Daryn pointed out - "If you can get a token, you 
> don't need a token"! Hence, these filesystems shouldn't use delegation tokens.
> Evolution of the JIRA is listed below:
> Update 2:
> DelegationTokenRenewer is not required. The filesystems that are using it 
> already have Krb tickets and do not need tokens. Remove 
> DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
> filesystems.
> Update 1:
> DelegationTokenRenewer should be a singleton - the instance and renewer 
> threads should be created/started lazily. The filesystems using the renewer 
> shouldn't need to explicitly start/stop the renewer, only register/de-register 
> for token renewal.
> Initial issue:
> HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
> thread when they are closed. 



[jira] [Resolved] (HDFS-3669) 'mvn clean' should delete native build directories

2012-10-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-3669.


Resolution: Fixed

"mvn clean" deletes the "target" directories, which contain the native build 
directories.

> 'mvn clean' should delete native build directories
> --
>
> Key: HDFS-3669
> URL: https://issues.apache.org/jira/browse/HDFS-3669
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Running maven clean (mvn clean) should delete the native build directories.  
> This is useful in cases where you need to re-run cmake from scratch.  One 
> example of such a case is when the set of installed system libraries has 
> changed.



[jira] [Work started] (HDFS-3669) 'mvn clean' should delete native build directories

2012-10-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-3669 started by Colin Patrick McCabe.

> 'mvn clean' should delete native build directories
> --
>
> Key: HDFS-3669
> URL: https://issues.apache.org/jira/browse/HDFS-3669
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> Running maven clean (mvn clean) should delete the native build directories.  
> This is useful in cases where you need to re-run cmake from scratch.  One 
> example of such a case is when the set of installed system libraries has 
> changed.



[jira] [Commented] (HDFS-3990) NN's health report has severe performance problems

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475370#comment-13475370
 ] 

Hadoop QA commented on HDFS-3990:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548945/HDFS-3990.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3323//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3323//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3323//console

This message is automatically generated.

> NN's health report has severe performance problems
> --
>
> Key: HDFS-3990
> URL: https://issues.apache.org/jira/browse/HDFS-3990
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3990.patch
>
>
> The dfshealth page will place a read lock on the namespace while it does a 
> DNS lookup for every DN.  On a multi-thousand-node cluster, this often 
> results in 10s+ load time for the health page.  10 concurrent requests were 
> found to cause 7m+ load times, during which write operations blocked.



[jira] [Commented] (HDFS-3540) Further improvement on recovery mode and edit log toleration in branch-1

2012-10-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475315#comment-13475315
 ] 

Colin Patrick McCabe commented on HDFS-3540:


Looks good to me.

> Further improvement on recovery mode and edit log toleration in branch-1
> 
>
> Key: HDFS-3540
> URL: https://issues.apache.org/jira/browse/HDFS-3540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 1.2.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3540_20120925.patch, h3540_20120926.patch, 
> h3540_20120927.patch, h3540_20121009.patch, HDFS-3540-b1.004.patch
>
>
> *Recovery Mode*: HDFS-3479 backported HDFS-3335 to branch-1.  However, the 
> recovery mode feature in branch-1 is dramatically different from the recovery 
> mode in trunk since the edit log implementations in these two branches are 
> different.  For example, there is UNCHECKED_REGION_LENGTH in branch-1 but not 
> in trunk.
> *Edit Log Toleration*: HDFS-3521 added this feature to branch-1 to remedy 
> UNCHECKED_REGION_LENGTH and to tolerate edit log corruption.
> There are overlaps between these two features.  We study potential further 
> improvement in this issue.



[jira] [Updated] (HDFS-3990) NN's health report has severe performance problems

2012-10-12 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3990:
--

Target Version/s: 2.0.2-alpha, 0.23.4, 3.0.0  (was: 0.23.4, 3.0.0, 
2.0.2-alpha)
  Status: Patch Available  (was: Open)

> NN's health report has severe performance problems
> --
>
> Key: HDFS-3990
> URL: https://issues.apache.org/jira/browse/HDFS-3990
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 2.0.0-alpha, 0.23.0, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3990.patch
>
>
> The dfshealth page will place a read lock on the namespace while it does a 
> DNS lookup for every DN.  On a multi-thousand-node cluster, this often 
> results in 10s+ load times for the health page.  10 concurrent requests were 
> found to cause 7m+ load times, during which write operations blocked.



[jira] [Updated] (HDFS-3990) NN's health report has severe performance problems

2012-10-12 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3990:
--

Attachment: HDFS-3990.patch

This is an incremental improvement that caches resolved datanode addrs to 
prevent multiple unnecessary DNS lookups.  It actually affects more than just 
the dfshealth page, so this should provide much improved performance. 

I will file another jira for investigating how to get the web page execution 
out of the namesystem lock.
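The caching idea can be sketched minimally as follows. This is a hypothetical illustration, not the actual HDFS-3990 patch: the class and method names are invented, and a plain map stands in for the real datanode descriptor plumbing. Each host is resolved at most once; later requests are served from the cache.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of the caching idea (invented names): resolve
// each datanode host at most once and serve later requests from cache.
public class DnsCacheSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private int lookups = 0; // counts actual resolutions performed

    // 'resolver' stands in for the real (slow) DNS lookup.
    public String resolve(String host, Function<String, String> resolver) {
        return cache.computeIfAbsent(host, h -> {
            lookups++;
            return resolver.apply(h);
        });
    }

    public static void main(String[] args) {
        DnsCacheSketch c = new DnsCacheSketch();
        Function<String, String> slow = h -> "10.0.0.1"; // pretend DNS answer
        c.resolve("dn1.example.com", slow);
        c.resolve("dn1.example.com", slow); // cache hit, no second lookup
        System.out.println(c.lookups); // prints 1
    }
}
```

With the cache in place, a page render that previously issued one lookup per datanode issues at most one lookup per distinct host.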

> NN's health report has severe performance problems
> --
>
> Key: HDFS-3990
> URL: https://issues.apache.org/jira/browse/HDFS-3990
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3990.patch
>
>
> The dfshealth page will place a read lock on the namespace while it does a 
> dns lookup for every DN.  On a multi-thousand node cluster, this often 
> results in 10s+ load time for the health page.  10 concurrent requests were 
> found to cause 7m+ load times during which time write operations blocked.



[jira] [Created] (HDFS-4047) BPServiceActor has nested shouldRun loops

2012-10-12 Thread Eli Collins (JIRA)
Eli Collins created HDFS-4047:
-

 Summary: BPServiceActor has nested shouldRun loops
 Key: HDFS-4047
 URL: https://issues.apache.org/jira/browse/HDFS-4047
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor


BPServiceActor#run and offerService both have while (shouldRun()) loops. We only 
need the outer one, i.e., we can hoist the info log from offerService out to run 
and remove the inner while loop.

{code}
BPServiceActor#run:

while (shouldRun()) {
  try {
offerService();
  } catch (Exception ex) {
...

offerService:

while (shouldRun()) {
  try {
{code}
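The proposed refactor can be sketched like this. It is a toy illustration, not the actual BPServiceActor code: a counter stands in for the real service state, and all names are invented. offerService performs a single pass, and only run keeps a while (shouldRun()) loop.

```java
// Hypothetical sketch of the refactor: the inner loop is removed, so
// offerService() does one pass and run() owns the only shouldRun() loop.
public class LoopSketch {
    private int remaining = 3; // stand-in for shouldRun() state
    private int passes = 0;

    boolean shouldRun() { return remaining > 0; }

    void offerService() {      // one pass; no inner while loop
        passes++;
        remaining--;
    }

    void run() {
        while (shouldRun()) {  // the single remaining loop
            try {
                offerService();
            } catch (Exception ex) {
                // log and retry, as in BPServiceActor#run
            }
        }
    }

    public static void main(String[] args) {
        LoopSketch s = new LoopSketch();
        s.run();
        System.out.println(s.passes); // prints 3
    }
}
```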




[jira] [Assigned] (HDFS-4045) SecondaryNameNode cannot read from QuorumJournal URI

2012-10-12 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson reassigned HDFS-4045:
---

Assignee: Andy Isaacson

> SecondaryNameNode cannot read from QuorumJournal URI
> 
>
> Key: HDFS-4045
> URL: https://issues.apache.org/jira/browse/HDFS-4045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Vinithra Varadharajan
>Assignee: Andy Isaacson
>
> If HDFS is set up in basic mode (non-HA) with QuorumJournal, and the 
> dfs.namenode.edits.dir is set to only the QuorumJournal URI and no local dir, 
> the SecondaryNameNode is unable to do a checkpoint.



[jira] [Updated] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-10-12 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3077:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 3.0.0, QuorumJournalManager (HDFS-3077)
>
> Attachments: hdfs-3077-partial.txt, hdfs-3077-test-merge.txt, 
> hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, 
> hdfs-3077.txt, hdfs-3077.txt, qjournal-design.pdf, qjournal-design.pdf, 
> qjournal-design.pdf, qjournal-design.pdf, qjournal-design.pdf, 
> qjournal-design.pdf, qjournal-design.tex, qjournal-design.tex
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.
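The core quorum rule — an edit is committed once a strict majority of journal nodes acknowledge it — can be sketched minimally (a hypothetical helper, not the actual QuorumJournalManager protocol):

```java
// Hypothetical sketch of the quorum-commit rule: a write is committed
// once more than half of the journal nodes have acknowledged it.
public class QuorumSketch {
    static boolean committed(int acks, int journalNodes) {
        return acks > journalNodes / 2; // strict majority
    }

    public static void main(String[] args) {
        System.out.println(committed(2, 3)); // prints true
        System.out.println(committed(1, 3)); // prints false
    }
}
```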



[jira] [Commented] (HDFS-4009) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475169#comment-13475169
 ] 

Karthik Kambatla commented on HDFS-4009:


Thanks Daryn and Owen. 

The last patch (hadoop-8852-v1.patch dated 9/27) addresses your suggestions. 
There seem to be findbugs warnings and test issues - I'll try to fix them ASAP 
and post a clean patch.


> WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
> -
>
> Key: HDFS-4009
> URL: https://issues.apache.org/jira/browse/HDFS-4009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Tom White
>Assignee: Karthik Kambatla
> Attachments: hadoop-8852.patch, hadoop-8852.patch, 
> hadoop-8852-v1.patch
>
>
> Parent JIRA to track the work of removing delegation tokens from these 
> filesystems. 
> This JIRA has evolved from the initial issue of these filesystems not 
> stopping the DelegationTokenRenewer thread they were creating.
> After further investigation, Daryn pointed out - "If you can get a token, you 
> don't need a token"! Hence, these filesystems shouldn't use delegation tokens.
> Evolution of the JIRA is listed below:
> Update 2:
> DelegationTokenRenewer is not required. The filesystems that are using it 
> already have Krb tickets and do not need tokens. Remove 
> DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
> filesystems.
> Update1:
> DelegationTokenRenewer should be Singleton - the instance and renewer threads 
> should be created/started lazily. The filesystems using the renewer shouldn't 
> need to explicitly start/stop the renewer, and only register/de-register for 
> token renewal.
> Initial issue:
> HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
> thread when they are closed. 



[jira] [Closed] (HDFS-4010) Remove unused TokenRenewer implementation from WebHdfsFileSystem and HftpFileSystem

2012-10-12 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley closed HDFS-4010.
---


> Remove unused TokenRenewer implementation from WebHdfsFileSystem and 
> HftpFileSystem
> ---
>
> Key: HDFS-4010
> URL: https://issues.apache.org/jira/browse/HDFS-4010
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: HDFS-4010.patch
>
>
> WebHdfsFileSystem and HftpFileSystem implement TokenRenewer without using 
> anywhere.
> As we are in the process of migrating them to not use tokens, this code 
> should be removed.



[jira] [Commented] (HDFS-4009) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-12 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475161#comment-13475161
 ] 

Owen O'Malley commented on HDFS-4009:
-

That looks like the right approach, Daryn. Just use a singleton thread for 
renewing and mark it as a daemon thread. Add tokens to the queue for renewal and 
delete them when the filesystem is closed.
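The singleton daemon renewer can be sketched as follows. This is a hypothetical simplification (invented names; tokens reduced to strings, and the periodic renewal body elided), not the real DelegationTokenRenewer:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: one lazily started daemon thread serves all
// registered tokens; a filesystem deregisters its token on close().
public class RenewerSketch {
    private static volatile RenewerSketch instance;
    private final Queue<String> tokens = new ConcurrentLinkedQueue<>();
    private Thread thread; // started lazily on first registration

    static RenewerSketch getInstance() {
        if (instance == null) {
            synchronized (RenewerSketch.class) {
                if (instance == null) instance = new RenewerSketch();
            }
        }
        return instance;
    }

    synchronized void register(String token) {
        tokens.add(token);
        if (thread == null) {
            thread = new Thread(() -> { /* renew queued tokens periodically */ });
            thread.setDaemon(true); // does not keep the JVM alive
            thread.start();
        }
    }

    void deregister(String token) { tokens.remove(token); }

    public static void main(String[] args) {
        RenewerSketch r = RenewerSketch.getInstance();
        r.register("token-1");
        r.register("token-2");
        r.deregister("token-1"); // as a filesystem would on close()
        System.out.println(r.tokens.size()); // prints 1
    }
}
```

Because the thread is a daemon, filesystems no longer need to start or stop it explicitly; they only register and deregister tokens.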

> WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
> -
>
> Key: HDFS-4009
> URL: https://issues.apache.org/jira/browse/HDFS-4009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Tom White
>Assignee: Karthik Kambatla
> Attachments: hadoop-8852.patch, hadoop-8852.patch, 
> hadoop-8852-v1.patch
>
>
> Parent JIRA to track the work of removing delegation tokens from these 
> filesystems. 
> This JIRA has evolved from the initial issue of these filesystems not 
> stopping the DelegationTokenRenewer thread they were creating.
> After further investigation, Daryn pointed out - "If you can get a token, you 
> don't need a token"! Hence, these filesystems shouldn't use delegation tokens.
> Evolution of the JIRA is listed below:
> Update 2:
> DelegationTokenRenewer is not required. The filesystems that are using it 
> already have Krb tickets and do not need tokens. Remove 
> DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
> filesystems.
> Update1:
> DelegationTokenRenewer should be Singleton - the instance and renewer threads 
> should be created/started lazily. The filesystems using the renewer shouldn't 
> need to explicitly start/stop the renewer, and only register/de-register for 
> token renewal.
> Initial issue:
> HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
> thread when they are closed. 



[jira] [Updated] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4046:
-

Target Version/s: 3.0.0, 2.0.3-alpha, 0.23.5

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generate C++ code from hdfs.proto, the generated file cannot compile 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and 
> use the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently do).
> I can make a patch for solution 2.
>  



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475123#comment-13475123
 ] 

Kihwal Lee commented on HDFS-4046:
--

Binglin, thanks for finding this. I will review the patch once you upload one.

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Priority: Minor
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generate C++ code from hdfs.proto, the generated file cannot compile 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and 
> use the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently do).
> I can make a patch for solution 2.
>  



[jira] [Assigned] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-4046:


Assignee: Binglin Chang

> ChecksumTypeProto use NULL as enum value which is illegal in C/C++
> --
>
> Key: HDFS-4046
> URL: https://issues.apache.org/jira/browse/HDFS-4046
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
>
> I tried to write a native HDFS client using the protobuf-based protocol. When 
> I generate C++ code from hdfs.proto, the generated file cannot compile 
> because NULL is an already-defined macro.
> I am considering two solutions:
> 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
> fine for all languages, but this may break compatibility.
> 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and 
> use the enum integer value (DataChecksum.Type.id) to convert between 
> ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
> match (they currently do).
> I can make a patch for solution 2.
>  



[jira] [Created] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-12 Thread Binglin Chang (JIRA)
Binglin Chang created HDFS-4046:
---

 Summary: ChecksumTypeProto use NULL as enum value which is illegal 
in C/C++
 Key: HDFS-4046
 URL: https://issues.apache.org/jira/browse/HDFS-4046
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Priority: Minor


I tried to write a native HDFS client using the protobuf-based protocol. When I 
generate C++ code from hdfs.proto, the generated file cannot compile because 
NULL is an already-defined macro.
I am considering two solutions:
1. Refactor all DataChecksum.Type.NULL references to NONE, which should be fine 
for all languages, but this may break compatibility.
2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and use 
the enum integer value (DataChecksum.Type.id) to convert between 
ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
match (they currently do).
I can make a patch for solution 2.
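Solution 2 can be sketched as follows. This is a hypothetical simplification (invented class and enum members; only a few checksum types shown), not the actual DataChecksum/ChecksumTypeProto code: the Java enum keeps NULL, the protobuf enum is renamed to NONE, and conversion goes through the shared integer id rather than the enum name.

```java
// Hypothetical sketch: map between the Java enum (NULL) and the proto
// enum (NONE) via their shared integer ids, not via enum names.
public class ChecksumMapSketch {
    enum JavaType { NULL(0), CRC32(1), CRC32C(2);
        final int id; JavaType(int id) { this.id = id; } }

    enum ProtoType { NONE(0), CRC32(1), CRC32C(2);
        final int id; ProtoType(int id) { this.id = id; } }

    static ProtoType toProto(JavaType t) {
        for (ProtoType p : ProtoType.values())
            if (p.id == t.id) return p;  // ids are kept in sync
        throw new IllegalArgumentException("no proto type for id " + t.id);
    }

    public static void main(String[] args) {
        System.out.println(toProto(JavaType.NULL));   // prints NONE
        System.out.println(toProto(JavaType.CRC32C)); // prints CRC32C
    }
}
```

Renaming only the protobuf side avoids the C/C++ NULL macro clash without touching the many Java call sites.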
 



[jira] [Commented] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475074#comment-13475074
 ] 

Hudson commented on HDFS-4044:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #2878 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2878/])
HDFS-4044. Duplicate ChecksumType definition in HDFS .proto files. 
Contributed by Binglin Chang. (Revision 1397580)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397580
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto


> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.0-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.
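The reuse can be sketched as a proto fragment. Field names here are assumptions for illustration; the real datatransfer.proto may differ:

```proto
// Sketch: datatransfer.proto drops its duplicate checksum enum and
// refers to ChecksumTypeProto, which hdfs.proto already defines.
import "hdfs.proto";

message ChecksumProto {
  required ChecksumTypeProto type = 1;  // reused from hdfs.proto
  required uint32 bytesPerChecksum = 2;
}
```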



[jira] [Commented] (HDFS-4022) Replication not happening for appended block

2012-10-12 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475035#comment-13475035
 ] 

Uma Maheswara Rao G commented on HDFS-4022:
---

Thanks Vinay for the patch!

+1 for the patch.

@Nicholas, do you have any comments on the patch? I wanted to take your opinion 
as well before committing, as it touches the core append and replication flows.

> Replication not happening for appended block
> 
>
> Key: HDFS-4022
> URL: https://issues.apache.org/jira/browse/HDFS-4022
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>Priority: Blocker
> Attachments: HDFS-4022.patch, HDFS-4022.patch, HDFS-4022.patch
>
>
> Block written and finalized.
> Append was later called, and the block's GenTS changed.
> DN side log: "Can't send invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" 
> logged continuously.
> NN side log: "INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error 
> report from DatanodeRegistration(192.xx.xx.xx, 
> storageID=DS-2040532042-192.xx.xx.xx-50010-1348830863443, infoPort=50075, 
> ipcPort=50020, storageInfo=lv=-40;cid=123456;nsid=116596173;c=0): Can't send 
> invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" also 
> logged continuously.
> The block checked for transfer is the one with the old genTS, whereas the new 
> block with the updated genTS exists in the data dir.



[jira] [Commented] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475030#comment-13475030
 ] 

Hudson commented on HDFS-4044:
--

Integrated in Hadoop-Common-trunk-Commit #2855 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2855/])
HDFS-4044. Duplicate ChecksumType definition in HDFS .proto files. 
Contributed by Binglin Chang. (Revision 1397580)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397580
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto


> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.0-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Commented] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475028#comment-13475028
 ] 

Hudson commented on HDFS-4044:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2917 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2917/])
HDFS-4044. Duplicate ChecksumType definition in HDFS .proto files. 
Contributed by Binglin Chang. (Revision 1397580)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397580
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/datatransfer.proto


> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.0-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Updated] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4044:
--

  Component/s: data-node
Affects Version/s: 2.0.0-alpha

> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.0-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Updated] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4044:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to both trunk and branch-2. Thank you Binglin.

> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 2.0.0-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Fix For: 2.0.3-alpha
>
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Commented] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475010#comment-13475010
 ] 

Suresh Srinivas commented on HDFS-4044:
---

bq. This patch is just code refactor, so I don't think it need adding new 
testcase.
Agree.

I will commit the patch shortly.

> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475002#comment-13475002
 ] 

Hudson commented on HDFS-3912:
--

Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
HDFS-3912. Detect and avoid stale datanodes for writes. Contributed by Jing 
Zhao (Revision 1397211)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397211
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSClusterStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch, HDFS-3912.005.patch, 
> HDFS-3912.006.patch, HDFS-3912.007.patch, HDFS-3912.008.patch, 
> HDFS-3912.009.patch, HDFS-3912-010.patch, HDFS-3912-branch-1.1-001.patch
>
>
> 1. Make stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for writes that skip the stale 
> nodes.
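One plausible reading of item 1 is a cap on how much of the cluster may be treated as stale: stale nodes are avoided for writes only while the stale fraction stays below a ratio, so a mass-staleness event (e.g. a network hiccup) cannot exclude most of the cluster. The sketch below is a hypothetical illustration of that idea, with invented names; the ratio cap is an assumption, not something stated in the issue.

```java
// Hypothetical sketch: avoid stale datanodes for writes only while the
// fraction of stale nodes stays below a configured cap.
public class StaleRatioSketch {
    static boolean avoidStaleForWrite(int staleNodes, int totalNodes, double ratioCap) {
        return totalNodes > 0 && (double) staleNodes / totalNodes < ratioCap;
    }

    public static void main(String[] args) {
        System.out.println(avoidStaleForWrite(5, 100, 0.5));  // prints true
        System.out.println(avoidStaleForWrite(60, 100, 0.5)); // prints false
    }
}
```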



[jira] [Commented] (HDFS-4041) Hadoop HDFS Maven protoc calls must not depend on external sh script

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474999#comment-13474999
 ] 

Hudson commented on HDFS-4041:
--

Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
HDFS-4041. Hadoop HDFS Maven protoc calls must not depend on external sh 
script. Contributed by Chris Nauroth. (Revision 1397362)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397362
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml


> Hadoop HDFS Maven protoc calls must not depend on external sh script
> 
>
> Key: HDFS-4041
> URL: https://issues.apache.org/jira/browse/HDFS-4041
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HDFS-4041-branch-2.patch, HDFS-4041.patch
>
>
> Currently, several pom.xml files rely on external shell scripting to call 
> protoc.  The sh binary may not be available on all developers' machines (e.g. 
> Windows without Cygwin).  This issue tracks removal of that dependency in 
> Hadoop HDFS.



[jira] [Commented] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474996#comment-13474996
 ] 

Hudson commented on HDFS-3077:
--

Integrated in Hadoop-Mapreduce-trunk #1224 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1224/])
Merge CHANGES for HDFS-3077 into the main CHANGES.txt file (Revision 
1397352)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397352
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-3077.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: QuorumJournalManager (HDFS-3077)
>
> Attachments: hdfs-3077-partial.txt, hdfs-3077-test-merge.txt, 
> hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, 
> hdfs-3077.txt, hdfs-3077.txt, qjournal-design.pdf, qjournal-design.pdf, 
> qjournal-design.pdf, qjournal-design.pdf, qjournal-design.pdf, 
> qjournal-design.pdf, qjournal-design.tex, qjournal-design.tex
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.



[jira] [Updated] (HDFS-4022) Replication not happening for appended block

2012-10-12 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-4022:


Attachment: HDFS-4022.patch

Uploading the latest correct patch. Please review.

> Replication not happening for appended block
> 
>
> Key: HDFS-4022
> URL: https://issues.apache.org/jira/browse/HDFS-4022
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>Priority: Blocker
> Attachments: HDFS-4022.patch, HDFS-4022.patch, HDFS-4022.patch
>
>
> Block written and finalized.
> Later, append called; block GenTS got changed.
> DN side log: 
> "Can't send invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" 
> logged continuously.
> NN side log:
> "INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error report from 
> DatanodeRegistration(192.xx.xx.xx, 
> storageID=DS-2040532042-192.xx.xx.xx-50010-1348830863443, infoPort=50075, 
> ipcPort=50020, storageInfo=lv=-40;cid=123456;nsid=116596173;c=0): Can't send 
> invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" also 
> logged continuously.
> The block checked for transfer is the one with the old genTS, whereas the new 
> block with the updated genTS exists in the data dir.



[jira] [Commented] (HDFS-4022) Replication not happening for appended block

2012-10-12 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474988#comment-13474988
 ] 

Vinay commented on HDFS-4022:
-

Oops!! I uploaded the wrong patch..

> Replication not happening for appended block
> 
>
> Key: HDFS-4022
> URL: https://issues.apache.org/jira/browse/HDFS-4022
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>Priority: Blocker
> Attachments: HDFS-4022.patch, HDFS-4022.patch, HDFS-4022.patch
>
>
> Block written and finalized.
> Later, append called; block GenTS got changed.
> DN side log: 
> "Can't send invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" 
> logged continuously.
> NN side log:
> "INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error report from 
> DatanodeRegistration(192.xx.xx.xx, 
> storageID=DS-2040532042-192.xx.xx.xx-50010-1348830863443, infoPort=50075, 
> ipcPort=50020, storageInfo=lv=-40;cid=123456;nsid=116596173;c=0): Can't send 
> invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" also 
> logged continuously.
> The block checked for transfer is the one with the old genTS, whereas the new 
> block with the updated genTS exists in the data dir.



[jira] [Commented] (HDFS-4042) send Cache-Control header on JSP pages

2012-10-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474985#comment-13474985
 ] 

Steve Loughran commented on HDFS-4042:
--

Looks like a duplicate of https://issues.apache.org/jira/browse/HADOOP-6607.

The big question is how to fix it: per servlet or with a filter, and if a 
filter is used, what impact does that have on things like WebHDFS?

Note that it's traditional to add some other headers as well; HADOOP-6607 
lists them. The Expires header ensures the response isn't cached on the client 
either, and the other two stop proxies of various sorts from caching it.
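As a rough illustration of the filter approach discussed above (the class name and the exact header values are my assumptions, not what this issue or HADOOP-6607 finally settled on), the three anti-caching headers could be centralized in one place rather than touched in every JSP:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch: the anti-caching headers a servlet filter could stamp on responses. */
public class NoCacheHeaders {
    public static Map<String, String> headers() {
        Map<String, String> h = new LinkedHashMap<>();
        // Tells HTTP/1.1 caches (browsers, shared caches) not to store the page.
        h.put("Cache-Control", "private, no-cache");
        // Stops older HTTP/1.0 proxies from caching.
        h.put("Pragma", "no-cache");
        // An already-expired date keeps the client from reusing a stale copy.
        h.put("Expires", "Thu, 01 Jan 1970 00:00:00 GMT");
        return h;
    }

    public static void main(String[] args) {
        // In a real servlet filter these would become response.setHeader(...) calls.
        headers().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

In a filter, each entry would become a response.setHeader call applied before the JSP renders; doing it once in a filter avoids editing every JSP, at the cost of also affecting endpoints like WebHDFS unless the filter is scoped by URL pattern.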

> send Cache-Control header on JSP pages
> --
>
> Key: HDFS-4042
> URL: https://issues.apache.org/jira/browse/HDFS-4042
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node, name-node
>Affects Versions: 2.0.2-alpha
>Reporter: Andy Isaacson
>Priority: Minor
>
> We should send a Cache-Control header on JSP pages so that HTTP/1.1 compliant 
> caches can properly manage cached data.
> Currently our JSPs send:
> {noformat}
> % curl -v http://nn1:50070/dfshealth.jsp
> ...
> < HTTP/1.1 200 OK
> < Content-Type: text/html; charset=utf-8
> < Expires: Thu, 01-Jan-1970 00:00:00 GMT
> < Set-Cookie: JSESSIONID=xtblchjm7o7j1y1f33r0mpmqp;Path=/
> < Content-Length: 3651
> < Server: Jetty(6.1.26)
> {noformat}
> Based on a quick reading of RFC 2616 
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html I think we want to 
> send {{Cache-Control: private, no-cache}} but I could be wrong.  The Jetty 
> docs http://docs.codehaus.org/display/JETTY/LastModifiedCacheControl indicate 
> this is fairly straightforward.



[jira] [Commented] (HDFS-4009) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474982#comment-13474982
 ] 

Daryn Sharp commented on HDFS-4009:
---

I'm not sure anything more complicated is needed than what is currently 
there.  Each token has its own independent expiration time, so batching them up 
doesn't seem to be of value, especially since there should generally be only 
one.  Each token also knows how to renew itself, so bundling it up with its 
renewer also seems unnecessary.

I think we're back to the renewer being a singleton.  Hftp/Webhdfs register 
their token on init, and cancel/unregister on close.  I'd lazily start the 
renewer when a token is registered, and shut it down when the last one is 
removed.
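The lifecycle described above can be made concrete. The following is a minimal illustration of the idea only, not the actual Hadoop implementation; the class and method names are hypothetical and renewal itself is stubbed out:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch: singleton renewer with lazy thread start and last-token shutdown. */
public class TokenRenewer {
    private static final TokenRenewer INSTANCE = new TokenRenewer();
    private final Set<String> tokens = ConcurrentHashMap.newKeySet();
    private Thread renewThread;

    public static TokenRenewer get() { return INSTANCE; }

    /** Called by a filesystem on init; starts the thread on the first token. */
    public synchronized void register(String token) {
        tokens.add(token);
        if (renewThread == null) {
            renewThread = new Thread(this::renewLoop, "token-renewer");
            renewThread.setDaemon(true);
            renewThread.start();
        }
    }

    /** Called by a filesystem on close; stops the thread with the last token. */
    public synchronized void unregister(String token) {
        tokens.remove(token);
        if (tokens.isEmpty() && renewThread != null) {
            renewThread.interrupt();
            renewThread = null;
        }
    }

    private void renewLoop() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Each token knows how to renew itself; renewal is stubbed here.
                Thread.sleep(1000);
            }
        } catch (InterruptedException ignored) {
            // Interrupted by unregister(): exit the loop.
        }
    }

    public synchronized boolean running() { return renewThread != null; }
}
```

With this shape, the filesystems never start or stop the renewer explicitly; the register/unregister calls drive the thread's lifetime.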

> WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
> -
>
> Key: HDFS-4009
> URL: https://issues.apache.org/jira/browse/HDFS-4009
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Tom White
>Assignee: Karthik Kambatla
> Attachments: hadoop-8852.patch, hadoop-8852.patch, 
> hadoop-8852-v1.patch
>
>
> Parent JIRA to track the work of removing delegation tokens from these 
> filesystems. 
> This JIRA has evolved from the initial issue of these filesystems not 
> stopping the DelegationTokenRenewer thread they were creating.
> After further investigation, Daryn pointed out - "If you can get a token, you 
> don't need a token"! Hence, these filesystems shouldn't use delegation tokens.
> Evolution of the JIRA is listed below:
> Update 2:
> DelegationTokenRenewer is not required. The filesystems that are using it 
> already have Krb tickets and do not need tokens. Remove 
> DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
> filesystems.
> Update1:
> DelegationTokenRenewer should be a Singleton - the instance and renewer threads 
> should be created/started lazily. The filesystems using the renewer shouldn't 
> need to explicitly start/stop the renewer, and should only register/de-register 
> for token renewal.
> Initial issue:
> HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
> thread when they are closed. 



[jira] [Commented] (HDFS-3912) Detecting and avoiding stale datanodes for writing

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474976#comment-13474976
 ] 

Hudson commented on HDFS-3912:
--

Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
HDFS-3912. Detect and avoid stale datanodes for writes. Contributed by Jing 
Zhao (Revision 1397211)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397211
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSClusterStats.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java


> Detecting and avoiding stale datanodes for writing
> --
>
> Key: HDFS-3912
> URL: https://issues.apache.org/jira/browse/HDFS-3912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-3912.001.patch, HDFS-3912.002.patch, 
> HDFS-3912.003.patch, HDFS-3912.004.patch, HDFS-3912.005.patch, 
> HDFS-3912.006.patch, HDFS-3912.007.patch, HDFS-3912.008.patch, 
> HDFS-3912.009.patch, HDFS-3912-010.patch, HDFS-3912-branch-1.1-001.patch
>
>
> 1. Make the stale timeout adaptive to the number of nodes marked stale in the 
> cluster.
> 2. Consider having a separate configuration for writes that skip the stale 
> nodes.



[jira] [Commented] (HDFS-4041) Hadoop HDFS Maven protoc calls must not depend on external sh script

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474973#comment-13474973
 ] 

Hudson commented on HDFS-4041:
--

Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
HDFS-4041. Hadoop HDFS Maven protoc calls must not depend on external sh 
script. Contributed by Chris Nauroth. (Revision 1397362)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397362
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml


> Hadoop HDFS Maven protoc calls must not depend on external sh script
> 
>
> Key: HDFS-4041
> URL: https://issues.apache.org/jira/browse/HDFS-4041
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HDFS-4041-branch-2.patch, HDFS-4041.patch
>
>
> Currently, several pom.xml files rely on external shell scripting to call 
> protoc.  The sh binary may not be available on all developers' machines (e.g. 
> Windows without Cygwin).  This issue tracks removal of that dependency in 
> Hadoop HDFS.



[jira] [Commented] (HDFS-3077) Quorum-based protocol for reading and writing edit logs

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474970#comment-13474970
 ] 

Hudson commented on HDFS-3077:
--

Integrated in Hadoop-Hdfs-trunk #1193 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1193/])
Merge CHANGES for HDFS-3077 into the main CHANGES.txt file (Revision 
1397352)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397352
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-3077.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Quorum-based protocol for reading and writing edit logs
> ---
>
> Key: HDFS-3077
> URL: https://issues.apache.org/jira/browse/HDFS-3077
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: QuorumJournalManager (HDFS-3077)
>
> Attachments: hdfs-3077-partial.txt, hdfs-3077-test-merge.txt, 
> hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, 
> hdfs-3077.txt, hdfs-3077.txt, qjournal-design.pdf, qjournal-design.pdf, 
> qjournal-design.pdf, qjournal-design.pdf, qjournal-design.pdf, 
> qjournal-design.pdf, qjournal-design.tex, qjournal-design.tex
>
>
> Currently, one of the weak points of the HA design is that it relies on 
> shared storage such as an NFS filer for the shared edit log. One alternative 
> that has been proposed is to depend on BookKeeper, a ZooKeeper subproject 
> which provides a highly available replicated edit log on commodity hardware. 
> This JIRA is to implement another alternative, based on a quorum commit 
> protocol, integrated more tightly in HDFS and with the requirements driven 
> only by HDFS's needs rather than more generic use cases. More details to 
> follow.



[jira] [Commented] (HDFS-4022) Replication not happening for appended block

2012-10-12 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474967#comment-13474967
 ] 

Uma Maheswara Rao G commented on HDFS-4022:
---

{code}
 // Start a new datanode
+  cluster.startDataNodes(conf, 1, true, null, null);
+
+  // Append to the file.
+  FSDataOutputStream append = fileSystem.append(f);
+  append.write("/testAppend".getBytes());
+  append.close();
+

{code}
You need to start the DN after the append and close.

> Replication not happening for appended block
> 
>
> Key: HDFS-4022
> URL: https://issues.apache.org/jira/browse/HDFS-4022
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>Priority: Blocker
> Attachments: HDFS-4022.patch, HDFS-4022.patch
>
>
> Block written and finalized.
> Later, append called; block GenTS got changed.
> DN side log: 
> "Can't send invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" 
> logged continuously.
> NN side log:
> "INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error report from 
> DatanodeRegistration(192.xx.xx.xx, 
> storageID=DS-2040532042-192.xx.xx.xx-50010-1348830863443, infoPort=50075, 
> ipcPort=50020, storageInfo=lv=-40;cid=123456;nsid=116596173;c=0): Can't send 
> invalid block 
> BP-407900822-192.xx.xx.xx-1348830837061:blk_-9185630731157263852_108738" also 
> logged continuously.
> The block checked for transfer is the one with the old genTS, whereas the new 
> block with the updated genTS exists in the data dir.



[jira] [Commented] (HDFS-3224) Bug in check for DN re-registration with different storage ID

2012-10-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474965#comment-13474965
 ] 

Hudson commented on HDFS-3224:
--

Integrated in Hadoop-Hdfs-0.23-Build #402 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/402/])
HDFS-3224. Bug in check for DN re-registration with different storage ID. 
Contributed by Jason Lowe (Revision 1397100)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397100
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDatanodeRegistration.java


> Bug in check for DN re-registration with different storage ID
> -
>
> Key: HDFS-3224
> URL: https://issues.apache.org/jira/browse/HDFS-3224
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eli Collins
>Assignee: Jason Lowe
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.5
>
> Attachments: HDFS-3224-branch0.23.patch, HDFS-3224.patch, 
> HDFS-3224.patch, HDFS-3224.patch, HDFS-3224.patch
>
>
> DatanodeManager#registerDatanode checks the host to node map using an IP:port 
> key; however, the map is keyed on IP, so this check will always fail. It's 
> performing the check to determine if a DN with the same IP and storage ID has 
> already registered, and if so to remove this DN from the map and indicate 
> that, e.g., it's no longer hosting these blocks. This bug has been here forever.



[jira] [Commented] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474891#comment-13474891
 ] 

Binglin Chang commented on HDFS-4044:
-

@Suresh Thanks for the review.
This patch is just a code refactor, so I don't think it needs a new test case.
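Concretely, the refactor amounts to something like the following in datatransfer.proto (the field layout here is a sketch under the assumption that hdfs.proto already declares ChecksumTypeProto, as the issue description states):

```proto
// datatransfer.proto already has: import "hdfs.proto";
// so its messages can reference ChecksumTypeProto directly,
// rather than declaring a second ChecksumType enum of their own:
message ChecksumProto {
  required ChecksumTypeProto type = 1;   // reused from hdfs.proto
  required uint32 bytesPerChecksum = 2;
}
```

Since both enums described the same wire values, deleting the duplicate and importing the shared one changes no serialized bytes, which is why no new test case is needed.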

> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Commented] (HDFS-4044) Duplicate ChecksumType definition in HDFS .proto files

2012-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474887#comment-13474887
 ] 

Hadoop QA commented on HDFS-4044:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12548859/HDFS-4044-checksumtype.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3321//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3321//console

This message is automatically generated.

> Duplicate ChecksumType definition in HDFS .proto files
> --
>
> Key: HDFS-4044
> URL: https://issues.apache.org/jira/browse/HDFS-4044
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Binglin Chang
>Assignee: Binglin Chang
> Attachments: HDFS-4044-checksumtype.patch
>
>
> Both hdfs.proto and datatransfer.proto define ChecksumType enum, 
> datatransfer.proto already includes hdfs.proto, so it should reuse 
> ChecksumTypeProto enum.



[jira] [Commented] (HDFS-4045) SecondaryNameNode cannot read from QuorumJournal URI

2012-10-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474862#comment-13474862
 ] 

Todd Lipcon commented on HDFS-4045:
---

The issue is that the GetImageServlet doesn't yet know about generalized 
journal managers. We need to switch it over to use journal manager APIs to find 
the edits and stream them out.

> SecondaryNameNode cannot read from QuorumJournal URI
> 
>
> Key: HDFS-4045
> URL: https://issues.apache.org/jira/browse/HDFS-4045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Vinithra Varadharajan
>
> If HDFS is set up in basic mode (non-HA) with QuorumJournal, and the 
> dfs.namenode.edits.dir is set to only the QuorumJournal URI and no local dir, 
> the SecondaryNameNode is unable to do a checkpoint.



[jira] [Created] (HDFS-4045) SecondaryNameNode cannot read from QuorumJournal URI

2012-10-12 Thread Vinithra Varadharajan (JIRA)
Vinithra Varadharajan created HDFS-4045:
---

 Summary: SecondaryNameNode cannot read from QuorumJournal URI
 Key: HDFS-4045
 URL: https://issues.apache.org/jira/browse/HDFS-4045
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Vinithra Varadharajan


If HDFS is set up in basic mode (non-HA) with QuorumJournal, and the 
dfs.namenode.edits.dir is set to only the QuorumJournal URI and no local dir, 
the SecondaryNameNode is unable to do a checkpoint.



[jira] [Commented] (HDFS-4045) SecondaryNameNode cannot read from QuorumJournal URI

2012-10-12 Thread Vinithra Varadharajan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474861#comment-13474861
 ] 

Vinithra Varadharajan commented on HDFS-4045:
-

Exception in SecondaryNameNode logs:

11:58:21.069 PM ERROR   
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
Exception in doCheckpoint
java.io.IOException: Found no edit logs to download on NN since txid 1374
at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:361)
at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:465)
at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:331)
at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:298)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:452)
at 
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:294)
at java.lang.Thread.run(Thread.java:662)

> SecondaryNameNode cannot read from QuorumJournal URI
> 
>
> Key: HDFS-4045
> URL: https://issues.apache.org/jira/browse/HDFS-4045
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 3.0.0
>Reporter: Vinithra Varadharajan
>
> If HDFS is set up in basic mode (non-HA) with QuorumJournal, and the 
> dfs.namenode.edits.dir is set to only the QuorumJournal URI and no local dir, 
> the SecondaryNameNode is unable to do a checkpoint.
