RE: [VOTE] Commit HDFS-927 to both 0.20 and 0.21 branch?

2010-02-02 Thread Andrew Purtell
+1

   - Andy

-- Forwarded message --
From: Stack 
Date: Tue, Feb 2, 2010 at 10:22 PM
Subject: [VOTE] Commit HDFS-927 to both 0.20 and 0.21 branch?
To: hdfs-dev@hadoop.apache.org


I'd like to open a vote on committing HDFS-927 to both the hadoop 0.20
and 0.21 branches.

HDFS-927 "DFSInputStream retries too many times for new block
location" has an odd summary but in short, its a better HDFS-127
"DFSClient block read failures cause open DFSInputStream to become
unusable".  HDFS-127 is an old, popular issue that refuses to die.  We
voted on having it committed to the 0.20 branch not too long ago, see
http://www.mail-archive.com/hdfs-dev@hadoop.apache.org/msg00401.html,
only it broke TestFsck (See http://su.pr/1nylUn) so it was reverted.

At a high level, HDFS-127/HDFS-927 is about fixing DFSClient so that a
good read clears the failure count (previously, failures 'stuck' even
though there may have been hours of successful reads in between).  When
rolling hadoop 0.20.2 was proposed, a few fellas, myself included,
raised the lack of HDFS-127 as an obstacle.

HDFS-927 has been committed to TRUNK.

I'm +1 on committing to the 0.20 and 0.21 branches.

Thanks for taking the time to look into this issue.
St.Ack


  



Re: [VOTE] Commit HDFS-927 to both 0.20 and 0.21 branch?

2010-02-02 Thread Tsz Wo (Nicholas), Sze
+1
Nicholas Sze




- Original Message 
> From: Stack 
> To: hdfs-dev@hadoop.apache.org
> Sent: Tue, February 2, 2010 10:22:50 PM
> Subject: [VOTE] Commit HDFS-927 to both 0.20 and 0.21 branch?
> 
> I'd like to open a vote on committing HDFS-927 to both the hadoop 0.20
> and 0.21 branches.
> 
> HDFS-927 "DFSInputStream retries too many times for new block
> location" has an odd summary but in short, its a better HDFS-127
> "DFSClient block read failures cause open DFSInputStream to become
> unusable".  HDFS-127 is an old, popular issue that refuses to die.  We
> voted on having it committed to the 0.20 branch not too long ago, see
> http://www.mail-archive.com/hdfs-dev@hadoop.apache.org/msg00401.html,
> only it broke TestFsck (See http://su.pr/1nylUn) so it was reverted.
> 
> At a high level, HDFS-127/HDFS-927 is about fixing DFSClient so that a
> good read clears the failure count (previously, failures 'stuck' even
> though there may have been hours of successful reads in between).  When
> rolling hadoop 0.20.2 was proposed, a few fellas, myself included,
> raised the lack of HDFS-127 as an obstacle.
> 
> HDFS-927 has been committed to TRUNK.
> 
> I'm +1 on committing to the 0.20 and 0.21 branches.
> 
> Thanks for taking the time to look into this issue.
> St.Ack




[VOTE] Commit HDFS-927 to both 0.20 and 0.21 branch?

2010-02-02 Thread Stack
I'd like to open a vote on committing HDFS-927 to both the hadoop 0.20
and 0.21 branches.

HDFS-927 "DFSInputStream retries too many times for new block
location" has an odd summary but in short, its a better HDFS-127
"DFSClient block read failures cause open DFSInputStream to become
unusable".  HDFS-127 is an old, popular issue that refuses to die.  We
voted on having it committed to the 0.20 branch not too long ago, see
http://www.mail-archive.com/hdfs-dev@hadoop.apache.org/msg00401.html,
only it broke TestFsck (See http://su.pr/1nylUn) so it was reverted.

At a high level, HDFS-127/HDFS-927 is about fixing DFSClient so that a
good read clears the failure count (previously, failures 'stuck' even
though there may have been hours of successful reads in between).  When
rolling hadoop 0.20.2 was proposed, a few fellas, myself included,
raised the lack of HDFS-127 as an obstacle.
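To illustrate the idea, here is a minimal sketch (hypothetical names, not
the actual DFSClient code) of resetting the failure counter after a
successful read:

    // Minimal sketch of the HDFS-127/HDFS-927 idea; names are illustrative
    // and not the actual DFSClient internals.
    class BlockReadSketch {
        private static final int MAX_BLOCK_ACQUIRE_FAILURES = 3;
        private int failures = 0;   // consecutive failures for this stream

        int readWithRetries(java.io.InputStream in, byte[] buf)
                throws java.io.IOException {
            while (true) {
                try {
                    int n = in.read(buf);
                    failures = 0;   // the fix: a good read clears the count
                    return n;
                } catch (java.io.IOException e) {
                    if (++failures >= MAX_BLOCK_ACQUIRE_FAILURES) {
                        throw e;    // only consecutive failures kill the stream
                    }
                    // otherwise pick another replica and retry (elided)
                }
            }
        }
    }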

HDFS-927 has been committed to TRUNK.

I'm +1 on committing to the 0.20 and 0.21 branches.

Thanks for taking the time to look into this issue.
St.Ack


[jira] Resolved: (HDFS-933) Add createIdentifier() implementation to DelegationTokenSecretManager

2010-02-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved HDFS-933.
--

   Resolution: Fixed
Fix Version/s: 0.22.0
 Hadoop Flags: [Reviewed]

> Add createIdentifier() implementation to DelegationTokenSecretManager
> -
>
> Key: HDFS-933
> URL: https://issues.apache.org/jira/browse/HDFS-933
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Fix For: 0.22.0
>
> Attachments: h6419-01.patch, h6419-07.patch, h6419-09.patch
>
>
> The abstract method createIdentifier() is being added in Common (HADOOP-6419) and 
> needs to be implemented by DelegationTokenSecretManager.  This allows the RPC 
> server's authentication layer to deserialize received TokenIdentifiers.
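
For reference, such an override is typically just a factory method; a sketch
of what it likely looks like (not necessarily the committed patch):

    // Sketch only: return an empty identifier that the RPC server's
    // authentication layer can populate by deserializing the received bytes.
    @Override
    public DelegationTokenIdentifier createIdentifier() {
      return new DelegationTokenIdentifier();
    }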

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-944) FileContext/AFS need to provide the full ClientProtocol interface

2010-02-02 Thread Eli Collins (JIRA)
FileContext/AFS need to provide the full ClientProtocol interface
-

 Key: HDFS-944
 URL: https://issues.apache.org/jira/browse/HDFS-944
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client
Reporter: Eli Collins


FileContext and AbstractFileSystem are missing ClientProtocol APIs like 
setQuota and concat; they need to be brought up to parity so we can port 
classes that use FileSystem and DistributedFileSystem.
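
As a rough illustration of the gap, these are the kinds of methods that would
need to appear on the FileContext/AbstractFileSystem side (hypothetical sketch
only; the real names and signatures would be settled in the patch):

    import org.apache.hadoop.fs.Path;
    import java.io.IOException;

    // Hypothetical sketch -- mirrors what DistributedFileSystem already
    // exposes so that callers can be ported off FileSystem.
    public abstract class AbstractFileSystemParitySketch {
        /** Set namespace and diskspace quotas on a path (sketch). */
        public abstract void setQuota(Path src, long namespaceQuota,
                                      long diskspaceQuota) throws IOException;

        /** Concatenate existing source files into the target file (sketch). */
        public abstract void concat(Path trg, Path[] srcs) throws IOException;
    }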


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-943) Port DFSAdmin to FileContext

2010-02-02 Thread Eli Collins (JIRA)
Port DFSAdmin to FileContext


 Key: HDFS-943
 URL: https://issues.apache.org/jira/browse/HDFS-943
 Project: Hadoop HDFS
  Issue Type: Task
  Components: tools
Reporter: Eli Collins


DFSAdmin currently uses DistributedFileSystem; it should use FileContext and 
Hdfs.
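
A sketch of the direction of the port (illustrative only, not the actual
patch):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileContext;
    import org.apache.hadoop.fs.FileSystem;

    public class DfsAdminPortSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // Today: DFSAdmin obtains a FileSystem (downcast to
            // DistributedFileSystem for HDFS-specific calls).
            FileSystem fs = FileSystem.get(conf);

            // After the port: obtain a FileContext, backed for HDFS paths
            // by the Hdfs AbstractFileSystem implementation.
            FileContext fc = FileContext.getFileContext(conf);

            System.out.println("FileSystem home:  " + fs.getHomeDirectory());
            System.out.println("FileContext home: " + fc.getHomeDirectory());
        }
    }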

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-942) Read and write metrics misleading/not helpful

2010-02-02 Thread ryan rawson (JIRA)
Read and write metrics misleading/not helpful
-

 Key: HDFS-942
 URL: https://issues.apache.org/jira/browse/HDFS-942
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ryan rawson


The metrics wrap the entire readBlock()/writeBlock() call (in DataXceiver#run()).

The problem with this is that it includes the client socket read/write time. This 
means that if a client is slow at streaming a write block (e.g. in HBase), it skews 
the write metrics and makes it look like the server is really slow.  The same 
problem applies to reads.

It would be nice if the read/write metrics meant actual local disk I/O and related 
processing (e.g. checksum verification/generation).
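
A hypothetical sketch (not the DataXceiver code; all names are made up) of
why wrapping the whole operation skews the metric:

    public class WriteMetricSketch {
        // Stand-in for receiving a packet from a slow client.
        static byte[] receivePacketFromClient() throws InterruptedException {
            Thread.sleep(200);              // client stalls while streaming
            return new byte[64 * 1024];
        }
        // Stand-in for checksum generation plus the local disk write.
        static void writePacketToLocalDisk(byte[] packet) throws InterruptedException {
            Thread.sleep(5);
        }

        public static void main(String[] args) throws InterruptedException {
            long opStart = System.nanoTime();
            byte[] packet = receivePacketFromClient();  // client time included
            long diskStart = System.nanoTime();
            writePacketToLocalDisk(packet);
            long now = System.nanoTime();

            System.out.println("whole-op time (current metric, skewed): "
                + (now - opStart) / 1000000L + " ms");
            System.out.println("disk-only time (what the issue asks for): "
                + (now - diskStart) / 1000000L + " ms");
        }
    }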

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-941) Datanode xceiver protocol should allow reuse of a connection

2010-02-02 Thread Todd Lipcon (JIRA)
Datanode xceiver protocol should allow reuse of a connection


 Key: HDFS-941
 URL: https://issues.apache.org/jira/browse/HDFS-941
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node, hdfs client
Affects Versions: 0.22.0
Reporter: Todd Lipcon


Right now each connection into the datanode xceiver only processes one 
operation.

In the case that an operation leaves the stream in a well-defined state (e.g. a 
client reads to the end of a block successfully), the same connection could be 
reused for a second operation. This should improve random read performance 
significantly.
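
A hypothetical sketch of the reuse idea (illustrative names, not the actual
xceiver protocol): keep looping on the same socket as long as the previous
operation ended in a well-defined state.

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.net.Socket;

    class XceiverReuseSketch {
        void serve(Socket s) throws IOException {
            DataInputStream in = new DataInputStream(s.getInputStream());
            try {
                while (true) {
                    int opCode;
                    try {
                        opCode = in.readByte();   // next op, same connection
                    } catch (EOFException eof) {
                        break;                    // client is done; not an error
                    }
                    if (!processOp(opCode, s)) {
                        break;   // stream left in an undefined state -> close
                    }
                }
            } finally {
                s.close();
            }
        }

        // Stand-in for read/write block handling; returns true when the
        // stream ends in a well-defined state and can be reused.
        boolean processOp(int opCode, Socket s) throws IOException {
            return true;
        }
    }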

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.