[jira] Created: (HDFS-1074) TestProxyUtil fails

2010-04-02 Thread Tsz Wo (Nicholas), SZE (JIRA)
TestProxyUtil fails
---

 Key: HDFS-1074
 URL: https://issues.apache.org/jira/browse/HDFS-1074
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: contrib/hdfsproxy
Reporter: Tsz Wo (Nicholas), SZE


TestProxyUtil failed a few Hudson builds, including
[#289|http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/289/testReport/org.apache.hadoop.hdfsproxy/TestProxyUtil/testSendCommand/],
[#287|http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/289/testReport/org.apache.hadoop.hdfsproxy/TestProxyUtil/testSendCommand/],
etc.
{noformat}
junit.framework.AssertionFailedError: null
at org.apache.hadoop.hdfsproxy.TestProxyUtil.testSendCommand(TestProxyUtil.java:43)
{noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[VOTE] Commit hdfs-1024 to 0.20 branch

2010-04-02 Thread Stack
Please vote on committing HDFS-1024 to the hadoop 0.20 branch.

Background:

HDFS-1024 fixes possible trashing of the fsimage because of a failed copy
from the 2NN to the NN.  Ordinarily, possible corruption of this magnitude
would merit commit w/o need of a vote, only Dhruba correctly notes that
UNLESS both the NN and 2NN are upgraded, HDFS-1024 becomes an incompatible
change (the NN-2NN communication will always fail).  IMO, this
incompatible change can be plastered over with a release note, e.g.
WARNING, you MUST update the NN and 2NN when you go to hadoop 0.20.3.  If
you agree with me, please vote +1 on commit.

Thanks,
St.Ack


Re: [VOTE] Commit hdfs-1024 to 0.20 branch

2010-04-02 Thread Todd Lipcon
On Fri, Apr 2, 2010 at 10:38 AM, Stack st...@duboce.net wrote:

 Please vote on committing HDFS-1024 to the hadoop 0.20 branch.

 Background:

 HDFS-1024 fixes possible trashing of the fsimage because of a failed copy
 from the 2NN to the NN.  Ordinarily, possible corruption of this magnitude
 would merit commit w/o need of a vote, only Dhruba correctly notes that
 UNLESS both the NN and 2NN are upgraded, HDFS-1024 becomes an incompatible
 change (the NN-2NN communication will always fail).  IMO, this
 incompatible change can be plastered over with a release note, e.g.
 WARNING, you MUST update the NN and 2NN when you go to hadoop 0.20.3.  If
 you agree with me, please vote +1 on commit.


+1. If I recall correctly, the NN and 2NN already do a very strict version
check in branch 20, so it's not any more incompatible than any other change.
(I think Dhruba made the version check less strict in the FB branch.)

-Todd



-- 
Todd Lipcon
Software Engineer, Cloudera


[jira] Created: (HDFS-1076) HDFS CLI error tests fail with Avro RPC

2010-04-02 Thread Doug Cutting (JIRA)
HDFS CLI error tests fail with Avro RPC
---

 Key: HDFS-1076
 URL: https://issues.apache.org/jira/browse/HDFS-1076
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Doug Cutting
Assignee: Doug Cutting


Some HDFS command-line tests (TestHDFSCLI) fail when using AvroRpcEngine 
because the error string does not match.  Calling getMessage() on a remote 
exception thrown by WritableRpcEngine produces a string that contains the 
exception name followed by its getMessage(), while exceptions thrown by 
AvroRpcEngine contain just the getMessage() string of the original exception.
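For illustration only (the exception class, path, and expected pattern below are made up, not taken from TestHDFSCLI): an expected-output check that anchors on the exception class name matches the WritableRpcEngine-style string but not the Avro-style one.
{noformat}
import java.util.regex.Pattern;

public class ErrorStringMismatch {
  public static void main(String[] args) {
    // Shape described above for WritableRpcEngine: "<exception class>: <message>"
    String writableStyle =
        "java.io.FileNotFoundException: File /hypothetical/path does not exist.";
    // Shape described above for AvroRpcEngine: just the original getMessage()
    String avroStyle = "File /hypothetical/path does not exist.";

    // A made-up expected pattern of the kind a CLI test comparator might use
    String expected = "FileNotFoundException: File /hypothetical/path does not exist\\.";

    System.out.println(matches(writableStyle, expected)); // true
    System.out.println(matches(avroStyle, expected));     // false -> test failure
  }

  static boolean matches(String actual, String expectedRegex) {
    return Pattern.compile(expectedRegex).matcher(actual).find();
  }
}
{noformat}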

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-1077) namenode RPC protocol method names are overloaded

2010-04-02 Thread Doug Cutting (JIRA)
namenode RPC protocol method names are overloaded
-

 Key: HDFS-1077
 URL: https://issues.apache.org/jira/browse/HDFS-1077
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Reporter: Doug Cutting
Assignee: Doug Cutting


Avro RPC does not permit two different messages with the same name.  Several 
namenode RPC protocol method names are overloaded.
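A hypothetical illustration (these are not the actual namenode protocol methods): Java happily accepts overloads like the ones below, but an Avro protocol would need two messages with the same name, which Avro rejects; the usual remedy is to give each variant a distinct name.
{noformat}
// Hypothetical protocol, not the real namenode RPC interface: Java allows
// these overloads, but an Avro protocol would need two messages both named
// "lookup", which Avro does not permit.
interface OverloadedProtocol {
  String lookup(String path);
  String lookup(String path, boolean resolveLinks);
}

// One way to make the same operations Avro-friendly: distinct message names.
interface DisambiguatedProtocol {
  String lookup(String path);
  String lookupFollowingLinks(String path, boolean resolveLinks);
}
{noformat}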

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: HDFS Blockreport question

2010-04-02 Thread Alberich de megres
Hi again!

Could anyone help me?
I could not understand how the RPC class works. As far as I can see, it only
instantiates a single interface with no implementation for some methods
like blockReport. But then it uses RPC.getProxy to get a new class which
sends messages to the name node.

I'm sorry for this silly question, but I am really lost at this point.

Thanks for the patience.



On Fri, Apr 2, 2010 at 2:11 AM, Alberich de megres
alberich...@gmail.com wrote:
 Hi Jay!

 Thanks for the answer, but what I'm asking is how it works and what it sends.
 blockReport is declared in the DatanodeProtocol interface but has no
 implementation there.

 thanks!



 On Thu, Apr 1, 2010 at 5:50 PM, Jay Booth jaybo...@gmail.com wrote:
 In DataNode:
 public DatanodeProtocol namenode

 It's not a reference to an actual namenode, it's a wrapper for a network
 protocol created by that RPC.waitForProxy call -- so when it calls
 namenode.blockReport, it's sending that information over RPC to the namenode
 instance over the network

 On Thu, Apr 1, 2010 at 5:50 AM, Alberich de megres 
 alberich...@gmail.com wrote:

 Hi everyone!

 Sailing through the HDFS source code that comes with hadoop 0.20.2, I
 could not understand how HDFS sends a block report to the NameNode.

 As far as I can see, in
 src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java we
 create the this.namenode interface with an RPC.waitForProxy call (and I
 could not understand which class it instantiates, or how it works).

 After that, the datanode generates a block list report (blockListAsLongs)
 with data.getBlockReport and calls this.namenode.blockReport(..);
 inside namenode.blockReport it then calls namesystem.processReport,
 which leads to an update of the block lists inside the name server.

 But how does it send this block report over the network?

 Can anyone shed some light on this?

 thanks for all!
 (and sorry for the newbie question)

 Alberich





Re: HDFS Blockreport question

2010-04-02 Thread Ryan Rawson
If you look at the getProxy code, it passes an Invoker (or something
like that) which the proxy code uses to delegate calls TO.  The
Invoker calls another class, Client, which has inner classes like
Call and Connection that wrap the actual Java IO.  This all lives in
the org.apache.hadoop.ipc package.

Be sure to use a good IDE like IJ or Eclipse to browse the code; it
makes following all this stuff much easier.
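
A minimal, self-contained sketch of the pattern described above, using plain JDK dynamic proxies rather than the actual org.apache.hadoop.ipc classes (the interface and method below are stand-ins, not the real DatanodeProtocol): getProxy-style factories hand java.lang.reflect.Proxy an InvocationHandler (Hadoop's Invoker), so every call on the returned interface lands in invoke(), where the real code serializes the method name and arguments and ships them to the server through a Client and its Connection.
{noformat}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;

public class RpcProxySketch {

  // A stand-in for a protocol interface such as DatanodeProtocol.
  interface ExampleProtocol {
    String blockReport(long[] blocks);
  }

  // A stand-in for Hadoop's Invoker: nothing local implements the interface;
  // every method call lands here, and the real code would hand the method
  // name and serialized arguments to a Client/Connection for transport.
  static class Invoker implements InvocationHandler {
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) {
      System.out.println("would send over the wire: " + method.getName()
          + Arrays.deepToString(args));
      return "ack";  // real code: deserialize and return the server's response
    }
  }

  public static void main(String[] args) {
    // A stand-in for RPC.getProxy()/RPC.waitForProxy(): build a proxy object
    // that implements the protocol interface but delegates to the Invoker.
    ExampleProtocol namenode = (ExampleProtocol) Proxy.newProxyInstance(
        ExampleProtocol.class.getClassLoader(),
        new Class<?>[] { ExampleProtocol.class },
        new Invoker());

    // Looks like a local method call, but nothing local implements it.
    System.out.println(namenode.blockReport(new long[] { 1L, 2L, 3L }));
  }
}
{noformat}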




