[jira] Assigned: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan reassigned HDFS-646:
---

Assignee: Giridharan Kesavan

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HDFS-646:


Attachment: hdfs-646.patch

Adds the test-contrib target.
Thanks!

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: hdfs-646.patch
>
>





[jira] Updated: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HDFS-646:


Status: Patch Available  (was: Open)

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: hdfs-646.patch
>
>





[jira] Commented: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758643#action_12758643
 ] 

Hadoop QA commented on HDFS-646:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12420351/hdfs-646.patch
  against trunk revision 817863.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/40/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/40/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/40/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/40/console

This message is automatically generated.

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: hdfs-646.patch
>
>





[jira] Commented: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758733#action_12758733
 ] 

Hadoop QA commented on HDFS-646:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12420351/hdfs-646.patch
  against trunk revision 817863.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/41/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/41/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/41/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/41/console

This message is automatically generated.

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: hdfs-646.patch
>
>





[jira] Updated: (HDFS-597) Modification introduced by HDFS-537 breaks an advice binding in FSDatasetAspects

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-597:


Attachment: HDFS-597.patch

This version of the patch is a significantly better fix for the problem.

> Modification introduced by HDFS-537 breaks an advice binding in 
> FSDatasetAspects
> ---
>
> Key: HDFS-597
> URL: https://issues.apache.org/jira/browse/HDFS-597
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-597.patch, HDFS-597.patch
>
>
> HDFS-537's patch removed the {{createBlockWriteStreams}} method, which was bound 
> by the {{FSDatasetAspects.callCreateBlockWriteStream}} pointcut.
> While this hasn't broken any tests, a number of JIRAs were reproduced by 
> injecting this particular fault.
> The AJC compiler issues warnings during the build when something like the 
> above happens. These warnings have to be watched carefully.




[jira] Updated: (HDFS-222) Support for concatenating of files into a single file

2009-09-23 Thread Boris Shkolnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Shkolnik updated HDFS-222:


Attachment: HDFS-222.patch

first draft

> Support for concatenating of files into a single file
> -
>
> Key: HDFS-222
> URL: https://issues.apache.org/jira/browse/HDFS-222
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Venkatesh S
>Assignee: Boris Shkolnik
> Attachments: HDFS-222.patch
>
>
> An API to concatenate files of the same size and replication factor on HDFS 
> into a single larger file.




[jira] Commented: (HDFS-627) Support replica update in datanode

2009-09-23 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758787#action_12758787
 ] 

Hairong Kuang commented on HDFS-627:


> I cannot see any code in FSDataset#finalizeBlock checking the "finalized" 
> directory. Could you give me more hints?
You need to move the replica from the "rbw" directory to the "finalized" directory. 
This is done by FSVolume#addBlock(Block, File).
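At the file-system level, the move described above amounts to renaming the block file from one directory to the other. A minimal sketch with plain java.io.File (class and method names here are illustrative; the real FSVolume#addBlock also updates the volume's block map and handles the block's metadata file):

```java
import java.io.File;
import java.io.IOException;

// Sketch of the on-disk part of finalizing a replica: move its block file
// from the "rbw" (replica being written) directory to "finalized".
// FinalizeSketch and finalizeReplica are illustrative names, not HDFS API.
public class FinalizeSketch {
    public static File finalizeReplica(File rbwDir, File finalizedDir,
                                       String blockFileName) throws IOException {
        File src = new File(rbwDir, blockFileName);
        File dst = new File(finalizedDir, blockFileName);
        if (!finalizedDir.isDirectory() && !finalizedDir.mkdirs()) {
            throw new IOException("cannot create " + finalizedDir);
        }
        if (!src.renameTo(dst)) {
            throw new IOException("cannot move " + src + " to " + dst);
        }
        return dst;
    }
}
```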

> Support replica update in datanode
> --
>
> Key: HDFS-627
> URL: https://issues.apache.org/jira/browse/HDFS-627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node
>Affects Versions: Append Branch
>Reporter: Tsz Wo (Nicholas), SZE
> Fix For: Append Branch
>
> Attachments: h627_20090917.patch, h627_20090921.patch, 
> h627_20090921b.patch, h627_20090922.patch
>
>
> This is a follow-up issue to HDFS-619. We are going to implement step 4c 
> described in the block recovery algorithm.




[jira] Created: (HDFS-647) Internal server errors

2009-09-23 Thread gary murry (JIRA)
Internal server errors
--

 Key: HDFS-647
 URL: https://issues.apache.org/jira/browse/HDFS-647
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: gary murry


Currently 15 tests are failing during the build. They all fail with "Server 
returned HTTP response code: 500 for URL:", where the URL varies.




[jira] Commented: (HDFS-647) Internal server errors

2009-09-23 Thread gary murry (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758792#action_12758792
 ] 

gary murry commented on HDFS-647:
-

The tests that are failing 
(http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-Hdfs-trunk/92/testReport/):
* org.apache.hadoop.hdfs.TestMissingBlocksAlert.testMissingBlocksAlert
* org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint
* org.apache.hadoop.hdfs.server.namenode.TestCheckpoint.testCheckpoint
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsck
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckNonExistent
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckPermission
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckMove
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckOpenFiles
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testCorruptBlock
* org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckError
* 
org.apache.hadoop.hdfs.server.namenode.TestNameEditsConfigs.testNameEditsConfigs
* org.apache.hadoop.hdfs.server.namenode.TestStartup.testChkpointStartup2
* org.apache.hadoop.hdfs.server.namenode.TestStartup.testChkpointStartup1
* org.apache.hadoop.hdfs.server.namenode.TestStartup.testSNNStartup
* 
org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

> Internal server errors
> --
>
> Key: HDFS-647
> URL: https://issues.apache.org/jira/browse/HDFS-647
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: gary murry
>
> Currently 15 tests are failing during the build. They all fail with "Server 
> returned HTTP response code: 500 for URL:", where the URL varies.




[jira] Commented: (HDFS-647) Internal server errors

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758793#action_12758793
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-647:
-

It looks like the cause is
{noformat}
2009-09-23 12:51:13,090 ERROR mortbay.log (?:invoke0(?)) - /dfshealth.jsp
java.lang.NullPointerException
at 
org.apache.hadoop.http.HtmlQuoting.quoteHtmlChars(HtmlQuoting.java:95)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter$RequestQuoter.getParameter(HttpServer.java:570)
at 
org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper$HealthJsp.generateHealthReport(NamenodeJspHelper.java:168)
at 
org.apache.hadoop.hdfs.server.namenode.dfshealth_jsp._jspService(dfshealth_jsp.java:96)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1124)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:613)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1115)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:361)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:324)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)
{noformat}
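The top frame suggests HtmlQuoting.quoteHtmlChars dereferences a value that can legitimately be null, since HttpServletRequest.getParameter returns null for an absent parameter. Below is a hedged sketch of the kind of null guard that would avoid such an NPE; the class is illustrative, not the actual Hadoop code or fix:

```java
// Hypothetical null-safe HTML quoting: if the servlet parameter is absent,
// getParameter returns null, and quoting must pass null through rather
// than dereference it.
public class HtmlQuotingSketch {
    public static String quoteHtmlChars(String item) {
        if (item == null) {
            return null; // guard: absent request parameters must not NPE
        }
        StringBuilder sb = new StringBuilder();
        for (char c : item.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&apos;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```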


> Internal server errors
> --
>
> Key: HDFS-647
> URL: https://issues.apache.org/jira/browse/HDFS-647
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: gary murry
>
> Currently 15 tests are failing during the build. They all fail with "Server 
> returned HTTP response code: 500 for URL:", where the URL varies.




[jira] Updated: (HDFS-627) Support replica update in datanode

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-627:


Attachment: h627_20090923.patch

Thanks, Hairong.

h627_20090923.patch: calling finalizeBlock(..).

> Support replica update in datanode
> --
>
> Key: HDFS-627
> URL: https://issues.apache.org/jira/browse/HDFS-627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node
>Affects Versions: Append Branch
>Reporter: Tsz Wo (Nicholas), SZE
> Fix For: Append Branch
>
> Attachments: h627_20090917.patch, h627_20090921.patch, 
> h627_20090921b.patch, h627_20090922.patch, h627_20090923.patch
>
>
> This is a follow-up issue to HDFS-619. We are going to implement step 4c 
> described in the block recovery algorithm.




[jira] Commented: (HDFS-222) Support for concatenating of files into a single file

2009-09-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758817#action_12758817
 ] 

Konstantin Shvachko commented on HDFS-222:
--

Interesting that read just works with different lengths!
I think you should combine the incomplete-block and full-block tests into one 
with randomized block lengths, so that all 10 of your files have different 
lengths.
It would also be interesting to test concatenation of files with different 
replication factors, so that the concat code takes care of removing or adding 
replicas of the blocks of the new file.
Boris, could you please revert the massive refactoring of imports in FSNamesystem 
and submit it as a separate patch? We need to keep three branches in sync now, 
and this change would not belong in the other two.
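The randomized-length setup suggested above could be sketched roughly as follows; the block size, file count, and all names here are assumptions for illustration, not the actual HDFS-222 test code:

```java
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;

// Sketch of the suggested randomized setup: pick n distinct file lengths,
// some a whole number of blocks, some ending in a partial block, so a single
// test covers both the incomplete-block and full-block cases.
public class RandomLengthsSketch {
    public static long[] distinctLengths(long blockSize, int n, long seed) {
        Random rnd = new Random(seed);
        Set<Long> lengths = new LinkedHashSet<>();
        while (lengths.size() < n) {
            long blocks = 1 + rnd.nextInt(4);                        // 1..4 full blocks
            long tail = rnd.nextBoolean()
                    ? 0                                              // exact multiple
                    : 1 + rnd.nextInt((int) blockSize - 1);          // partial last block
            lengths.add(blocks * blockSize + tail);                  // set keeps them distinct
        }
        long[] out = new long[n];
        int i = 0;
        for (long l : lengths) out[i++] = l;
        return out;
    }
}
```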

> Support for concatenating of files into a single file
> -
>
> Key: HDFS-222
> URL: https://issues.apache.org/jira/browse/HDFS-222
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Venkatesh S
>Assignee: Boris Shkolnik
> Attachments: HDFS-222.patch
>
>
> An API to concatenate files of the same size and replication factor on HDFS 
> into a single larger file.




[jira] Commented: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758821#action_12758821
 ] 

Suresh Srinivas commented on HDFS-624:
--

Comments:
# getNewStampForPipeline - would it be better to rename this getNewGenerationStamp?
# DFSClient.java - ResponseProcessor.run() - To avoid a style-check warning 
for an empty block, can the if statement be changed to {{ if (seqno != -2) }}?
# ResponseProcessor.run() - In the for loop, before throwing the exception, 
should {{responderClosed = true}} be set?
# ResponseProcessor.run() - Why is the try-catch block not inside the for 
loop? Is setting errorIndex to 0 fine?

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: pipelineRecovery.patch, pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 




[jira] Commented: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758832#action_12758832
 ] 

Konstantin Shvachko commented on HDFS-624:
--

I agree about naming ReplicaNotFoundException and getNewGenerationStamp.

For {{ClientProtocol.updatePipeline()}}, should we adopt the following 
signature instead:
{code}
updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] 
newNodes)
{code}


> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: pipelineRecovery.patch, pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 




[jira] Commented: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758858#action_12758858
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-624:
-

> Not sure if you want to fix it: The terms "block" and "replica" should be 
> used with care

I mean we could fix it in another issue, since changing the terminology is not 
within the scope of this issue.

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: pipelineRecovery.patch, pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 




[jira] Created: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)
Public access is required for some of methods in AppendTestUtil
---

 Key: HDFS-648
 URL: https://issues.apache.org/jira/browse/HDFS-648
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Konstantin Boudnik


Some of AppendTestUtil's methods are useful across the board. Thus, public 
access is needed for them.




[jira] Assigned: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HDFS-648:
---

Assignee: Konstantin Boudnik

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for them.




[jira] Updated: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-648:


Affects Version/s: Append Branch

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for them.




[jira] Updated: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-648:


Attachment: HDFS-648.patch

Trivial fix to change the visibility of some test utils.

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for them.




[jira] Updated: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-648:


Status: Patch Available  (was: Open)

The fix is so trivial that I'm going to submit it as a patch right away.

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for them.




[jira] Updated: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-648:


Attachment: HDFS-648.patch

The class has to have a public modifier as well.

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch, HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for them.




[jira] Updated: (HDFS-222) Support for concatenating of files into a single file

2009-09-23 Thread Boris Shkolnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Shkolnik updated HDFS-222:


Attachment: HDFS-222-1.patch

reverted imports

> Support for concatenating of files into a single file
> -
>
> Key: HDFS-222
> URL: https://issues.apache.org/jira/browse/HDFS-222
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Venkatesh S
>Assignee: Boris Shkolnik
> Attachments: HDFS-222-1.patch, HDFS-222.patch
>
>
> An API to concatenate files of the same size and replication factor on HDFS 
> into a single larger file.




[jira] Commented: (HDFS-222) Support for concatenating of files into a single file

2009-09-23 Thread Boris Shkolnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758886#action_12758886
 ] 

Boris Shkolnik commented on HDFS-222:
-

At this point we require all the files to have full blocks (except the final 
one) and the same replication.

I will revert the imports to make the merge easier.
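The constraint stated above (full blocks in every file except possibly the last, and one common replication factor) can be written down as a small precondition check. FileInfo and canConcat below are illustrative stand-ins, not the HDFS-222 patch API:

```java
import java.util.List;

// Hypothetical precondition check mirroring the stated concat constraint:
// every source file must consist of full blocks (only the last file may end
// with a partial block) and all files must share one replication factor.
public class ConcatPreconditionSketch {
    public static final class FileInfo {
        public final long length;
        public final long blockSize;
        public final short replication;
        public FileInfo(long length, long blockSize, short replication) {
            this.length = length;
            this.blockSize = blockSize;
            this.replication = replication;
        }
    }

    public static boolean canConcat(List<FileInfo> sources) {
        short rep = sources.get(0).replication;
        for (int i = 0; i < sources.size(); i++) {
            FileInfo f = sources.get(i);
            if (f.replication != rep) return false;       // same replication everywhere
            boolean last = (i == sources.size() - 1);
            if (!last && f.length % f.blockSize != 0) {
                return false;                             // non-final files need full blocks
            }
        }
        return true;
    }
}
```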

> Support for concatenating of files into a single file
> -
>
> Key: HDFS-222
> URL: https://issues.apache.org/jira/browse/HDFS-222
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Venkatesh S
>Assignee: Boris Shkolnik
> Attachments: HDFS-222-1.patch, HDFS-222.patch
>
>
> An API to concatenate files of the same size and replication factor on HDFS 
> into a single larger file.




[jira] Commented: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758892#action_12758892
 ] 

Hadoop QA commented on HDFS-648:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12420402/HDFS-648.patch
  against trunk revision 817863.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/42/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/42/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/42/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/42/console

This message is automatically generated.

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch, HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for them.




[jira] Commented: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758893#action_12758893
 ] 

Konstantin Shvachko commented on HDFS-624:
--

Maybe {{updateBlockForPipeline()}} would be a better name for 
{{getNewStampForPipeline()}}.

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: pipelineRecovery.patch, pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-624:


Attachment: HDFS-624-aspects.patch

The patch fixes the issue with aspect binding and two potential NPE spots when 
non-pipeline tests are running.

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: HDFS-624-aspects.patch, pipelineRecovery.patch, 
> pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HDFS-649) When non pipeline related tests are executed some advices causing NPE

2009-09-23 Thread Konstantin Boudnik (JIRA)
When non pipeline related tests are executed some advices causing NPE
-

 Key: HDFS-649
 URL: https://issues.apache.org/jira/browse/HDFS-649
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: Append Branch
Reporter: Konstantin Boudnik


{{DataTransferProtocolAspects}} has some advices which might throw a 
NullPointerException when non-pipeline-related tests are executed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-649) When non pipeline related tests are executed some advices causing NPE

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-649:


Attachment: HDFS-649.patch

This is the fix for the problem.
No test is required for this patch, as the existing ones should work just fine.

> When non pipeline related tests are executed some advices causing NPE
> -
>
> Key: HDFS-649
> URL: https://issues.apache.org/jira/browse/HDFS-649
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
> Attachments: HDFS-649.patch
>
>
> {{DataTransferProtocolAspects}} has some advices which might throw a 
> NullPointerException when non-pipeline-related tests are executed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-649) When non pipeline related tests are executed some advices causing NPE

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HDFS-649:
---

Assignee: Konstantin Boudnik

> When non pipeline related tests are executed some advices causing NPE
> -
>
> Key: HDFS-649
> URL: https://issues.apache.org/jira/browse/HDFS-649
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-649.patch
>
>
> {{DataTransferProtocolAspects}} has some advices which might throw a 
> NullPointerException when non-pipeline-related tests are executed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-649) When non pipeline related tests are executed some advices causing NPE

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-649:


Status: Patch Available  (was: Open)

Submitting the patch

> When non pipeline related tests are executed some advices causing NPE
> -
>
> Key: HDFS-649
> URL: https://issues.apache.org/jira/browse/HDFS-649
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
> Attachments: HDFS-649.patch
>
>
> {{DataTransferProtocolAspects}} has some advices which might throw a 
> NullPointerException when non-pipeline-related tests are executed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-649) When non pipeline related tests are executed some advices causing NPE

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-649:


Component/s: test

> When non pipeline related tests are executed some advices causing NPE
> -
>
> Key: HDFS-649
> URL: https://issues.apache.org/jira/browse/HDFS-649
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-649.patch
>
>
> {{DataTransferProtocolAspects}} has some advices which might throw a 
> NullPointerException when non-pipeline-related tests are executed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-637) DataNode sends an Success ack when block write fails

2009-09-23 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758906#action_12758906
 ] 

Hairong Kuang commented on HDFS-637:


All unit tests passed. Test patch result:
 [exec] +1 overall.
 [exec]
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec]
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no new tests are needed 
for this patch.
 [exec] Also please list what manual steps were 
performed to verify this patch.
 [exec]
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec]
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec]
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
warnings.
 [exec]
 [exec] +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.

A unit test is not included because HDFS-624 exposed the bug, and with this patch 
the previously failing test passes.


> DataNode sends an Success ack when block write fails
> 
>
> Key: HDFS-637
> URL: https://issues.apache.org/jira/browse/HDFS-637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
>Priority: Blocker
> Fix For: 0.21.0
>
> Attachments: interrupted.patch, interrupted1.patch
>
>
> While working on HDFS-624, I saw TestFileAppend3#TC7 occasionally fail. After 
> lots of debugging, I saw that the client unexpectedly received a response of "-2 
> SUCCESS SUCCESS", in which -2 is the packet sequence number. This happened in 
> a pipeline of 2 datanodes in which one of them failed. It turned out that when 
> the block receiver fails, it shuts itself down and interrupts the packet 
> responder. The responder tries to handle the interruption with the condition 
> "Thread.isInterrupted()", but unfortunately a thread's interrupt status is not 
> set in some cases, as explained in the Thread#interrupt javadoc:
>  If this thread is blocked in an invocation of the wait(), wait(long), or 
> wait(long, int) methods of the Object class, or of the join(), join(long), 
> join(long, int), sleep(long), or sleep(long, int) methods of this class, 
> then its interrupt status will be cleared and it will receive an 
> InterruptedException. 
> So the datanode does not detect the interruption and continues as if no error 
> occurred.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
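The interrupt-status subtlety described in the issue above can be seen in a few lines of plain Java. This is a minimal sketch, not HDFS code; the class and variable names are illustrative, with the sleeping thread standing in for the packet responder:

```java
// Demonstrates the Thread#interrupt behavior behind HDFS-637: when a thread is
// blocked in sleep()/wait(), interrupting it throws InterruptedException and
// CLEARS the interrupt status, so a later isInterrupted() check returns false.
public class InterruptStatusDemo {
    public static void main(String[] args) throws Exception {
        Thread responder = new Thread(() -> {
            try {
                Thread.sleep(10_000);  // blocked, like the responder waiting for acks
            } catch (InterruptedException e) {
                // The interrupt status was cleared when the exception was thrown.
                System.out.println("isInterrupted after catch: "
                        + Thread.currentThread().isInterrupted());  // false
                // To make the interruption visible to later checks, re-interrupt:
                Thread.currentThread().interrupt();
                System.out.println("after re-interrupt: "
                        + Thread.currentThread().isInterrupted());  // true
            }
        });
        responder.start();
        Thread.sleep(200);      // let the thread block in sleep()
        responder.interrupt();  // analogous to BlockReceiver interrupting the responder
        responder.join();
    }
}
```

A responder that relies solely on `isInterrupted()` therefore misses an interrupt delivered while it was blocked, which matches the failure mode described in the issue.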



[jira] Commented: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758907#action_12758907
 ] 

Konstantin Boudnik commented on HDFS-648:
-

The test failures seem to be unrelated and are described in HDFS-647

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch, HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for those

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-519) Create new tests for lease recovery

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HDFS-519:
---

Assignee: Konstantin Boudnik

> Create new tests for lease recovery
> ---
>
> Key: HDFS-519
> URL: https://issues.apache.org/jira/browse/HDFS-519
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
>
> According to the test plan, a number of new features are going to be 
> implemented as part of this umbrella (HDFS-265) JIRA.
> These new features have to be tested properly. Lease recovery is one piece of 
> new functionality which requires new tests to be developed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-648:


Attachment: HDFS-648.patch

Adding Javadoc comments to publicly visible methods. Thanks, Nicholas.

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch, HDFS-648.patch, HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for those

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-637) DataNode sends a Success ack when block write fails

2009-09-23 Thread Hairong Kuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hairong Kuang updated HDFS-637:
---

Hadoop Flags: [Reviewed]
 Summary: DataNode sends a Success ack when block write fails  (was: 
DataNode sends an Success ack when block write fails)

> DataNode sends a Success ack when block write fails
> ---
>
> Key: HDFS-637
> URL: https://issues.apache.org/jira/browse/HDFS-637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
>Priority: Blocker
> Fix For: 0.21.0
>
> Attachments: interrupted.patch, interrupted1.patch
>
>
> While working on HDFS-624, I saw TestFileAppend3#TC7 occasionally fail. After 
> lots of debugging, I saw that the client unexpectedly received a response of "-2 
> SUCCESS SUCCESS", in which -2 is the packet sequence number. This happened in 
> a pipeline of 2 datanodes in which one of them failed. It turned out that when 
> the block receiver fails, it shuts itself down and interrupts the packet 
> responder. The responder tries to handle the interruption with the condition 
> "Thread.isInterrupted()", but unfortunately a thread's interrupt status is not 
> set in some cases, as explained in the Thread#interrupt javadoc:
>  If this thread is blocked in an invocation of the wait(), wait(long), or 
> wait(long, int) methods of the Object class, or of the join(), join(long), 
> join(long, int), sleep(long), or sleep(long, int) methods of this class, 
> then its interrupt status will be cleared and it will receive an 
> InterruptedException. 
> So the datanode does not detect the interruption and continues as if no error 
> occurred.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HDFS-637) DataNode sends a Success ack when block write fails

2009-09-23 Thread Hairong Kuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hairong Kuang resolved HDFS-637.


Resolution: Fixed

I've just committed this.

> DataNode sends a Success ack when block write fails
> ---
>
> Key: HDFS-637
> URL: https://issues.apache.org/jira/browse/HDFS-637
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
>Priority: Blocker
> Fix For: 0.21.0
>
> Attachments: interrupted.patch, interrupted1.patch
>
>
> While working on HDFS-624, I saw TestFileAppend3#TC7 occasionally fail. After 
> lots of debugging, I saw that the client unexpectedly received a response of "-2 
> SUCCESS SUCCESS", in which -2 is the packet sequence number. This happened in 
> a pipeline of 2 datanodes in which one of them failed. It turned out that when 
> the block receiver fails, it shuts itself down and interrupts the packet 
> responder. The responder tries to handle the interruption with the condition 
> "Thread.isInterrupted()", but unfortunately a thread's interrupt status is not 
> set in some cases, as explained in the Thread#interrupt javadoc:
>  If this thread is blocked in an invocation of the wait(), wait(long), or 
> wait(long, int) methods of the Object class, or of the join(), join(long), 
> join(long, int), sleep(long), or sleep(long, int) methods of this class, 
> then its interrupt status will be cleared and it will receive an 
> InterruptedException. 
> So the datanode does not detect the interruption and continues as if no error 
> occurred.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-222) Support for concatenating of files into a single file

2009-09-23 Thread Boris Shkolnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Shkolnik updated HDFS-222:


Attachment: HDFS-222-2.patch

Added a test for permissions.

> Support for concatenating of files into a single file
> -
>
> Key: HDFS-222
> URL: https://issues.apache.org/jira/browse/HDFS-222
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Venkatesh S
>Assignee: Boris Shkolnik
> Attachments: HDFS-222-1.patch, HDFS-222-2.patch, HDFS-222.patch
>
>
> An API to concatenate files of same size and replication factor on HDFS into 
> a single larger file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
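The HDFS-222 description requires the concatenated files to share size and replication factor. A hedged sketch of the precondition check such an API implies (this is not the actual patch; the names `FileMeta` and `checkConcatable` are assumptions for illustration):

```java
import java.util.List;

// Illustrative precondition check for a concat-style API (per HDFS-222's
// requirement that files share size and replication factor). Not HDFS code.
public class ConcatPreconditions {
    static class FileMeta {
        final long blockSize;
        final short replication;
        FileMeta(long blockSize, short replication) {
            this.blockSize = blockSize;
            this.replication = replication;
        }
    }

    /** Reject any source whose block size or replication differs from the target. */
    static void checkConcatable(FileMeta target, List<FileMeta> srcs) {
        for (FileMeta s : srcs) {
            if (s.blockSize != target.blockSize) {
                throw new IllegalArgumentException("block size mismatch");
            }
            if (s.replication != target.replication) {
                throw new IllegalArgumentException("replication factor mismatch");
            }
        }
    }
}
```

Checking these invariants up front lets the namenode splice the source block lists into the target without rewriting any block data.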



[jira] Updated: (HDFS-648) Public access is required for some of methods in AppendTestUtil

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-648:


Hadoop Flags: [Reviewed]

+1 patch looks good.

The append branch seems currently broken.  I cannot run test-patch on any patch.

> Public access is required for some of methods in AppendTestUtil
> ---
>
> Key: HDFS-648
> URL: https://issues.apache.org/jira/browse/HDFS-648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-648.patch, HDFS-648.patch, HDFS-648.patch
>
>
> Some of AppendTestUtil's methods are useful across the board. Thus, public 
> access is needed for those

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-627) Support replica update in datanode

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-627:


Attachment: h627_20090923b.patch

h627_20090923b.patch: use detachFile(..) instead of detachBlock(..)

> Support replica update in datanode
> --
>
> Key: HDFS-627
> URL: https://issues.apache.org/jira/browse/HDFS-627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node
>Affects Versions: Append Branch
>Reporter: Tsz Wo (Nicholas), SZE
> Fix For: Append Branch
>
> Attachments: h627_20090917.patch, h627_20090921.patch, 
> h627_20090921b.patch, h627_20090922.patch, h627_20090923.patch, 
> h627_20090923b.patch
>
>
> This is a followed up issue of HDFS-619.  We are going to implement step 4c 
> described in the block recovery algorithm.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-200) In HDFS, sync() not yet guarantees data available to the new readers

2009-09-23 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758946#action_12758946
 ] 

ryan rawson commented on HDFS-200:
--

I'm not sure exactly what I'm seeing; here is the flow of events:

namenode says when the regionserver closes a log:

2009-09-23 17:21:05,128 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blockReceived: from 10.10.21.38:50010 1 blocks.
2009-09-23 17:21:05,128 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.blockReceived: blk_8594965619504827451_4351 is received from 10.10.21.38:50010
2009-09-23 17:21:05,128 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.10.21.38:50010 is added to blk_8594965619504827451_4351 size 573866
2009-09-23 17:21:05,130 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blockReceived: from 10.10.21.45:50010 1 blocks.
2009-09-23 17:21:05,130 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.blockReceived: blk_8594965619504827451_4351 is received from 10.10.21.45:50010
2009-09-23 17:21:05,130 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.10.21.45:50010 is added to blk_8594965619504827451_4351 size 573866
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blockReceived: from 10.10.21.32:50010 1 blocks.
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.blockReceived: blk_8594965619504827451_4351 is received from 10.10.21.32:50010
2009-09-23 17:21:05,131 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.10.21.32:50010 is added to blk_8594965619504827451_4351 size 573866
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: *DIR* NameNode.complete: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-2129099062
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-2129099062
2009-09-23 17:21:05,132 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.closeFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 with 1 blocks is persisted to the file system
2009-09-23 17:21:05,132 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 blocklist persisted

So at this point we have 3 blocks, they have all checked in, right?

then during logfile recovery we see:

2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: *DIR* NameNode.append: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542 at 10.10.21.29
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: src=/hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228, holder=DFSClient_-828773542, clientMachine=10.10.21.29, replication=512, overwrite=false, append=true
2009-09-23 17:21:45,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 567 Total time for transactions(ms): 9 Number of transactions batched in Syncs: 54 Number of syncs: 374 SyncTimes(ms): 12023 4148 3690 7663
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: UnderReplicationBlocks.update blk_8594965619504827451_4351 curReplicas 0 curExpectedReplicas 3 oldReplicas 0 oldExpectedReplicas 3 curPri 2 oldPri 2
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.UnderReplicationBlock.update: blk_8594965619504827451_4351 has only 0 replicas and need 3 replicas so is added to neededReplications at priority level 2
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.appendFile: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542 at 10.10.21.29 block blk_8594965619504827451_4351 block size 573866
2009-09-23 17:21:45,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop ip=/10.10.21.29 cmd=append src=/hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 dst=null perm=null
2009-09-23 17:21:47,265 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.UnderReplicationBlock.remove: Removing block blk_8594965619504827451_4351 from priority queue 2
2009-09-23 17:21:56,016 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.addBlock: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542
2009-09-23 17:21:56,016 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542
2009-09-23 17:21:56,016 DEBU

[jira] Commented: (HDFS-200) In HDFS, sync() not yet guarantees data available to the new readers

2009-09-23 Thread ryan rawson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758949#action_12758949
 ] 

ryan rawson commented on HDFS-200:
--

Scratch the last; I was having some environment/library version problems.

> In HDFS, sync() not yet guarantees data available to the new readers
> 
>
> Key: HDFS-200
> URL: https://issues.apache.org/jira/browse/HDFS-200
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: dhruba borthakur
>Priority: Blocker
> Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt, 
> fsyncConcurrentReaders11_20.txt, fsyncConcurrentReaders12_20.txt, 
> fsyncConcurrentReaders13_20.txt, fsyncConcurrentReaders14_20.txt, 
> fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch, 
> fsyncConcurrentReaders5.txt, fsyncConcurrentReaders6.patch, 
> fsyncConcurrentReaders9.patch, 
> hadoop-stack-namenode-aa0-000-12.u.powerset.com.log.gz, 
> hdfs-200-ryan-existing-file-fail.txt, hypertable-namenode.log.gz, 
> namenode.log, namenode.log, Reader.java, Reader.java, reopen_test.sh, 
> ReopenProblem.java, Writer.java, Writer.java
>
>
> In the append design doc 
> (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it 
> says
> * A reader is guaranteed to be able to read data that was 'flushed' before 
> the reader opened the file
> However, this feature is not yet implemented.  Note that the operation 
> 'flushed' is now called "sync".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-519) Create new tests for lease recovery

2009-09-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-519:


Attachment: HDFS-519.patch

First test case to verify the normal lease recovery process. 
The {{append()}} call doesn't work yet.

> Create new tests for lease recovery
> ---
>
> Key: HDFS-519
> URL: https://issues.apache.org/jira/browse/HDFS-519
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-519.patch
>
>
> According to the test plan, a number of new features are going to be 
> implemented as part of this umbrella (HDFS-265) JIRA.
> These new features have to be tested properly. Lease recovery is one piece of 
> new functionality which requires new tests to be developed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-649) When non pipeline related tests are executed some advices causing NPE

2009-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758953#action_12758953
 ] 

Hadoop QA commented on HDFS-649:


-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12420412/HDFS-649.patch
  against trunk revision 817863.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

-1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/43/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/43/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/43/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/43/console

This message is automatically generated.

> When non pipeline related tests are executed some advices causing NPE
> -
>
> Key: HDFS-649
> URL: https://issues.apache.org/jira/browse/HDFS-649
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: Append Branch
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HDFS-649.patch
>
>
> {{DataTransferProtocolAspects}} has some advices which might throw a 
> NullPointerException when non-pipeline-related tests are executed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758958#action_12758958
 ] 

Hairong Kuang commented on HDFS-624:


Cos, the patch that you uploaded today broke TestFiDataTransferProtocol, but 
the one you gave me earlier does not. Could you please take a quick look at the 
problem?

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: HDFS-624-aspects.patch, pipelineRecovery.patch, 
> pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-636) SafeMode should count only complete blocks.

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758967#action_12758967
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-636:
-

The append branch still has problems: test-patch fails on
{noformat}
 [exec] X [0] images/hdfs-logo.jpg  
 
BROKEN: 
/home/tsz/hadoop/hdfs/testpatch-append/src/docs/src/documentation/content/xdocs/images.hdfs-logo.jpg (No such file or directory)
{noformat}

> SafeMode should count only complete blocks.
> ---
>
> Key: HDFS-636
> URL: https://issues.apache.org/jira/browse/HDFS-636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: Append Branch
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: Append Branch
>
> Attachments: completeBlockTotal.patch
>
>
> During start up the name-node is in safe mode and is counting blocks reported 
> by data-nodes. When the number of minimally replicated blocks reaches the 
> configured threshold the name-node leaves safe mode. Currently all blocks are 
> counted towards the threshold including the ones that are under construction. 
> The under-construction blocks should be excluded from the count, because they 
> need to be recovered, which may take long time (lease expires in 1 hour by 
> default). Also the recovery may result in deleting those blocks so counting 
> them in the blocks total is incorrect.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
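The safe-mode change HDFS-636 describes amounts to a different denominator in the exit check: only complete blocks count toward the total. A hedged sketch, with illustrative names rather than the actual NameNode code:

```java
// Illustrative safe-mode exit check per HDFS-636: under-construction blocks are
// excluded from the total, so only COMPLETE blocks are counted. Not NameNode code.
public class SafeModeCheck {
    /**
     * @param safeBlocks         blocks that have reached minimal replication
     * @param completeBlockTotal total complete blocks (under-construction excluded)
     * @param threshold          fraction required to leave safe mode (e.g. 0.999)
     */
    public static boolean canLeaveSafeMode(long safeBlocks, long completeBlockTotal,
                                           double threshold) {
        if (completeBlockTotal == 0) {
            return true;  // nothing to wait for, e.g. an empty namespace
        }
        return (double) safeBlocks / completeBlockTotal >= threshold;
    }
}
```

Counting under-construction blocks in the denominator would make the threshold unreachable until their (potentially hour-long) recovery completes, which is exactly what the issue wants to avoid.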



[jira] Commented: (HDFS-636) SafeMode should count only complete blocks.

2009-09-23 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758969#action_12758969
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-636:
-

The file src/docs/src/documentation/resources/images/hdfs-logo.jpg was added by 
HDFS-574 and append-branch/CHANGES.txt contains HDFS-574.  However, 
hdfs-logo.jpg is not in the append-branch.

> SafeMode should count only complete blocks.
> ---
>
> Key: HDFS-636
> URL: https://issues.apache.org/jira/browse/HDFS-636
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: name-node
>Affects Versions: Append Branch
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
> Fix For: Append Branch
>
> Attachments: completeBlockTotal.patch
>
>
> During start up the name-node is in safe mode and is counting blocks reported 
> by data-nodes. When the number of minimally replicated blocks reaches the 
> configured threshold, the name-node leaves safe mode. Currently all blocks are 
> counted towards the threshold, including the ones that are under construction. 
> The under-construction blocks should be excluded from the count, because they 
> need to be recovered, which may take a long time (the lease expires in 1 hour 
> by default). Also, the recovery may result in deleting those blocks, so 
> counting them in the blocks total is incorrect.




[jira] Commented: (HDFS-245) Create symbolic links in HDFS

2009-09-23 Thread Nigel Daley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758995#action_12758995
 ] 

Nigel Daley commented on HDFS-245:
--

Thanks Eli.  The design states:
{quote}
Loops should be avoided by having the client limit the number of links it will 
traverse
{quote}
What about loops within a filesystem?  Does the NN also limit the number of 
links it will traverse? 
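The client-side loop avoidance quoted from the design can be sketched as a bounded resolution loop; the names and the specific limit below are hypothetical, not taken from the design document:

```java
import java.util.Map;

class SymlinkResolveSketch {
    // Assumed cap on link traversals, in the spirit of POSIX SYMLOOP_MAX.
    static final int MAX_LINK_DEPTH = 32;

    /** Resolve a path through a link -> target map; fail on excessive hops. */
    static String resolve(String path, Map<String, String> links) {
        int hops = 0;
        while (links.containsKey(path)) {
            if (++hops > MAX_LINK_DEPTH) {
                throw new IllegalStateException(
                    "too many levels of symbolic links: " + path);
            }
            path = links.get(path);   // follow one link
        }
        return path;                  // no longer a link: fully resolved
    }
}
```

A hop limit catches both cross-filesystem loops and loops within a single filesystem, since it bounds total traversals rather than detecting cycles explicitly.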

You give examples of commands that operate on the link, commands that operate 
on the link target, and commands whose behavior depends on a trailing slash. 
Given this is a design, can you be explicit and enumerate the commands in each 
of these categories? For instance, setReplication, setTimes, du, etc.

What are the new options for fsck to report dangling links? What does the 
output look like?

What is the new option for distcp to follow symlinks?  

If distcp doesn't follow symlinks, I assume it just copies the symlink.  In 
this case, is the symlink adjusted to point to the source location on the 
source FS?

What does the ls output look like for a symlink?

Do symlinks contribute bytes toward a quota?




> Create symbolic links in HDFS
> -
>
> Key: HDFS-245
> URL: https://issues.apache.org/jira/browse/HDFS-245
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: dhruba borthakur
>Assignee: dhruba borthakur
> Attachments: 4044_20081030spi.java, designdocv1.txt, 
> HADOOP-4044-strawman.patch, symlink-0.20.0.patch, symLink1.patch, 
> symLink1.patch, symLink11.patch, symLink12.patch, symLink13.patch, 
> symLink14.patch, symLink15.txt, symLink15.txt, symlink16-common.patch, 
> symlink16-hdfs.patch, symlink16-mr.patch, symLink4.patch, symLink5.patch, 
> symLink6.patch, symLink8.patch, symLink9.patch
>
>
> HDFS should support symbolic links. A symbolic link is a special type of file 
> that contains a reference to another file or directory in the form of an 
> absolute or relative path and that affects pathname resolution. Programs 
> which read or write to files named by a symbolic link will behave as if 
> operating directly on the target file. However, archiving utilities can 
> handle symbolic links specially and manipulate them directly.




[jira] Updated: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HDFS-646:


Priority: Blocker  (was: Major)

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hdfs-646.patch
>
>





[jira] Updated: (HDFS-646) missing test-contrib ant target would break hudson patch test process

2009-09-23 Thread Nigel Daley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nigel Daley updated HDFS-646:
-

Hadoop Flags: [Reviewed]

+1 code review.

> missing test-contrib ant target would break hudson patch test process
> -
>
> Key: HDFS-646
> URL: https://issues.apache.org/jira/browse/HDFS-646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
>Priority: Blocker
> Attachments: hdfs-646.patch
>
>





[jira] Commented: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759024#action_12759024
 ] 

Hairong Kuang commented on HDFS-624:


> but the one you gave it to you does not. 
I meant that the one you gave to me yesterday does not.

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: HDFS-624-aspects.patch, pipelineRecovery.patch, 
> pipelineRecovery1.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 
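The recovery flow implied by the description amounts to a retry loop: attempt pipeline setup, and on failure remove the failed datanode, bump the generation stamp, and retry with the survivors. The following is a toy sketch with hypothetical names; the real algorithm is in the design document:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class PipelineRecoverySketch {
    /**
     * Try to set up a write pipeline, excluding datanodes that fail setup.
     * badNodes simulates which datanodes fail; returns the surviving
     * pipeline, or throws if no datanode is left.
     */
    static List<String> setupPipeline(List<String> nodes, Set<String> badNodes) {
        List<String> pipeline = new ArrayList<>(nodes);
        long generationStamp = 0;
        while (!pipeline.isEmpty()) {
            String failed = firstBad(pipeline, badNodes);
            if (failed == null) {
                return pipeline;      // setup succeeded end-to-end
            }
            pipeline.remove(failed);  // exclude the failed datanode
            generationStamp++;        // a new GS fences out stale replicas
        }
        throw new IllegalStateException("all datanodes failed pipeline setup");
    }

    private static String firstBad(List<String> pipeline, Set<String> badNodes) {
        for (String n : pipeline) {
            if (badNodes.contains(n)) return n;
        }
        return null;
    }
}
```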




[jira] Updated: (HDFS-624) Client support pipeline recovery

2009-09-23 Thread Hairong Kuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hairong Kuang updated HDFS-624:
---

Attachment: pipelineRecovery2.patch

This patch addresses the following review comments:
1. In FSDataset.append(..), should we check whether newGS > replicaInfo's gs?
2. BlockNotFoundException should be ReplicaNotFoundException.
3. ClientProtocol.updatePipeline() should adopt the following signature:
updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] 
newNodes)
4. Maybe updateBlockForPipeline() would be a better name for 
getNewStampForPipeline().

It also includes some changes to the aspects made by Cos and a change to the 
fault-injection pipeline tests to make them work with the new pipeline code.
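The signature from comment 3 could look like the stub below. Block, DatanodeID, and the interface body are simplified stand-ins for illustration, not the real HDFS classes:

```java
// Simplified stand-ins for the real HDFS types; illustrative only.
class Block {
    final long blockId;
    final long generationStamp;
    Block(long blockId, long generationStamp) {
        this.blockId = blockId;
        this.generationStamp = generationStamp;
    }
}

class DatanodeID {
    final String name;
    DatanodeID(String name) { this.name = name; }
}

interface ClientProtocolSketch {
    /**
     * Record the new pipeline for a block under recovery: oldBlock (with
     * its old generation stamp) is replaced by newBlock on newNodes.
     */
    void updatePipeline(String clientName, Block oldBlock,
                        Block newBlock, DatanodeID[] newNodes);
}
```

Passing both the old and new Block lets the name-node verify the old generation stamp before committing the new pipeline.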

> Client support pipeline recovery
> 
>
> Key: HDFS-624
> URL: https://issues.apache.org/jira/browse/HDFS-624
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: Append Branch
>Reporter: Hairong Kuang
>Assignee: Hairong Kuang
> Fix For: Append Branch
>
> Attachments: HDFS-624-aspects.patch, pipelineRecovery.patch, 
> pipelineRecovery1.patch, pipelineRecovery2.patch
>
>
> This jira aims to
> 1. set up initial pipeline for append;
> 2. recover failed pipeline setup for append;
> 3. set up pipeline to recover failed data streaming.
> The algorithm is described in the design document in the pipeline recovery 
> and pipeline set up sections. Pipeline close and failed pipeline close are 
> not included in this jira. 




[jira] Commented: (HDFS-641) Move all of the benchmarks and tests that depend on mapreduce to mapreduce

2009-09-23 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759036#action_12759036
 ] 

Owen O'Malley commented on HDFS-641:


That had been the plan, but in practice it didn't work well. To build, you 
needed to compile common and update the common jars in hdfs and mapred. Then 
you compile hdfs and push the jar to mapreduce. Then you compile mapreduce and 
push it to hdfs. Then you compile hdfs-test and push it to mapreduce. Then you 
compile the mapreduce tests and push them to hdfs. Then you run the hdfs 
tests. Then you run the mapreduce tests.

By comparison, if we break the cycle, we can compile common, test common, 
compile hdfs, test hdfs, compile mapreduce, and test mapreduce. Yes, we need 
to do more work to test hdfs without mapreduce, but this is a good change.
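The contrast Owen describes can be illustrated with a small topological sort over module dependencies: once the hdfs-to-mapreduce test edge is removed, a single linear build order exists. This is a toy model, not the actual build scripts:

```java
import java.util.*;

class BuildOrderSketch {
    /** Kahn's algorithm: returns a build order, or null if deps contain a cycle. */
    static List<String> buildOrder(Map<String, List<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();   // unbuilt deps per module
        for (String m : deps.keySet()) remaining.put(m, deps.get(m).size());
        List<String> order = new ArrayList<>();
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : remaining.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());   // no deps: buildable now
        while (!ready.isEmpty()) {
            String m = ready.poll();
            order.add(m);
            for (String other : deps.keySet())              // m built: unblock dependents
                if (deps.get(other).contains(m)
                        && remaining.merge(other, -1, Integer::sum) == 0)
                    ready.add(other);
        }
        return order.size() == deps.size() ? order : null;  // null: cycle detected
    }
}
```

With deps of common -> {}, hdfs -> {common}, mapreduce -> {common, hdfs}, the sort yields the linear order common, hdfs, mapreduce; adding mapreduce to hdfs's deps reintroduces the cycle and the sort fails.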

> Move all of the benchmarks and tests that depend on mapreduce to mapreduce
> --
>
> Key: HDFS-641
> URL: https://issues.apache.org/jira/browse/HDFS-641
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.20.2
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Blocker
> Fix For: 0.21.0
>
>
> Currently, we have a bad cycle where to build hdfs you need to test mapreduce 
> and iterate once. This is broken.




[jira] Commented: (HDFS-627) Support replica update in datanode

2009-09-23 Thread Hairong Kuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759037#action_12759037
 ] 

Hairong Kuang commented on HDFS-627:


Calling finalizeBlock last may not work because the replicaInfo in the 
replicasMap is already a finalized one.

> Support replica update in datanode
> --
>
> Key: HDFS-627
> URL: https://issues.apache.org/jira/browse/HDFS-627
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: data-node
>Affects Versions: Append Branch
>Reporter: Tsz Wo (Nicholas), SZE
> Fix For: Append Branch
>
> Attachments: h627_20090917.patch, h627_20090921.patch, 
> h627_20090921b.patch, h627_20090922.patch, h627_20090923.patch, 
> h627_20090923b.patch
>
>
> This is a follow-up issue to HDFS-619.  We are going to implement step 4c 
> described in the block recovery algorithm.
