[ 
https://issues.apache.org/jira/browse/HDFS-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HDFS-5270:
------------------------------
    Attachment: HDFS-5270.2.patch

patch rebased to current trunk. Attaching for a full qa bot run.

I see a few test failures when running the relevant tests locally:

{code}
$ mvn -Dtest=TestBlock*,TestDataNode* package
... SNIP ...
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 146.067 sec <<< 
FAILURE! - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
testWrite(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS)  
Time elapsed: 62.509 sec  <<< ERROR!
java.io.IOException: All datanodes 
DatanodeInfoWithStorage[127.0.0.1:54558,DS-bc806196-a774-4af3-afe7-d6d88c53d15b,DISK]
 are bad. Aborting...
        at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1224)
        at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1016)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:403)

testAppend(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS) 
 Time elapsed: 62.484 sec  <<< ERROR!
java.io.IOException: All datanodes 
DatanodeInfoWithStorage[127.0.0.1:54662,DS-116db650-c3c9-4dee-9c3d-4343f12888d8,DISK]
 are bad. Aborting...
        at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1224)
        at 
org.apache.hadoop.hdfs.DataStreamer.processDatanodeError(DataStreamer.java:1016)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:403)

... SNIP ...

Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.564 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
testRaceBetweenReplicaRecoveryAndFinalizeBlock(org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery)
  Time elapsed: 5.98 sec  <<< FAILURE!
java.lang.AssertionError: Recovery should be initiated successfully
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at 
org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery.testRaceBetweenReplicaRecoveryAndFinalizeBlock(TestBlockRecovery.java:638)

...SNIP...

{code}

[~wheat9], are you still interested in this ticket? Presuming the above failures 
also show up on Jenkins, could you take a look at them?

> Use thread pools in the datanode daemons
> ----------------------------------------
>
>                 Key: HDFS-5270
>                 URL: https://issues.apache.org/jira/browse/HDFS-5270
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Haohui Mai
>            Assignee: Haohui Mai
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-5270.000.patch, HDFS-5270.2.patch, 
> TestConcurrentAccess.java
>
>
> The current implementation of the datanode creates a new thread for each 
> incoming request. This incurs high overhead for thread creation and 
> destruction, making the datanode unstable under highly concurrent loads.
> This JIRA proposes to use a thread pool to reduce the overheads.
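For reference, the thread-pool approach the description proposes could look roughly like the sketch below, using the standard {{java.util.concurrent}} executors. The class and method names here are hypothetical illustrations, not code from the attached patches:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolSketch {

  // Placeholder for per-request work (e.g. serving a block read/write).
  static void handle(int requestId) {
    // ... do the actual I/O here ...
  }

  public static void main(String[] args) throws InterruptedException {
    // Instead of "new Thread(() -> handle(id)).start()" per request,
    // reuse a bounded pool of worker threads.
    ExecutorService pool = Executors.newFixedThreadPool(8);
    for (int i = 0; i < 100; i++) {
      final int requestId = i;
      pool.execute(() -> handle(requestId));
    }
    // Stop accepting new tasks and wait for in-flight ones to drain.
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    System.out.println("done");
  }
}
{code}

Submitting tasks to a fixed pool caps the number of live threads regardless of the request rate, which avoids the per-request creation cost described above.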



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
