[ 
https://issues.apache.org/jira/browse/HADOOP-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14509304#comment-14509304
 ] 

Hadoop QA commented on HADOOP-11730:
------------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 32s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 3  line(s) that 
end in whitespace. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   5m 28s | The applied patch generated  1 
 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 38s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 15s | Tests passed in 
hadoop-aws. |
| | |  40m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12705845/HADOOP-11730-branch-2.6.0.001.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 189a63a |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/artifact/patchprocess/whitespace.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/artifact/patchprocess/checkstyle-result-diff.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/testReport/ |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6167/console |


This message was automatically generated.

> Regression: s3n read failure recovery broken
> --------------------------------------------
>
>                 Key: HADOOP-11730
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11730
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.5.0, 2.6.0
>         Environment: HDP 2.2
>            Reporter: Takenori Sato
>            Assignee: Takenori Sato
>         Attachments: HADOOP-11730-branch-2.6.0.001.patch
>
>
> s3n attempts to read again when it encounters an IOException during a read. 
> But the current logic does not reopen the connection, so the retry ends up as 
> a no-op, and the wrong (truncated) output is committed.
> Here's a stack trace as an example.
> {quote}
> 2015-03-13 20:17:24,835 [TezChild] INFO  
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor - 
> Starting output org.apache.tez.mapreduce.output.MROutput@52008dbd to vertex 
> scope-12
> 2015-03-13 20:17:24,866 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream threw an exception
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 296587138; received: 155648
>       at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:184)
>       at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
>       at 
> org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78)
>       at 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146)
>       at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:145)
>       at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>       at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>       at java.io.DataInputStream.read(DataInputStream.java:100)
>       at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>       at 
> org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>       at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>       at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:185)
>       at org.apache.pig.builtin.PigStorage.getNext(PigStorage.java:259)
>       at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:204)
>       at 
> org.apache.tez.mapreduce.lib.MRReaderMapReduce.next(MRReaderMapReduce.java:116)
>       at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POSimpleTezLoad.getNextTuple(POSimpleTezLoad.java:106)
>       at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>       at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:246)
>       at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>       at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POFilter.getNextTuple(POFilter.java:91)
>       at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
>       at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POStoreTez.getNextTuple(POStoreTez.java:117)
>       at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:313)
>       at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:192)
>       at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
>       at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
>       at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:415)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>       at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
>       at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> 2015-03-13 20:17:24,867 [TezChild] INFO  
> org.apache.hadoop.fs.s3native.NativeS3FileSystem - Received IOException while 
> reading 'user/hadoop/tsato/readlarge/input/cloudian-s3.log.20141119', 
> attempting to reopen.
> 2015-03-13 20:17:24,867 [TezChild] DEBUG 
> org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream - 
> Released HttpMethod as its response data stream is fully consumed
> 2015-03-13 20:17:24,868 [TezChild] INFO  
> org.apache.tez.dag.app.TaskAttemptListenerImpTezDag - Commit go/no-go request 
> from attempt_1426245338920_0001_1_00_000004_0
> 2015-03-13 20:17:24,868 [TezChild] INFO  
> org.apache.tez.dag.app.dag.impl.TaskImpl - 
> attempt_1426245338920_0001_1_00_000004_0 given a go for committing the task 
> output.
> {quote}
> This appears to be a regression introduced by the following optimizations:
> https://issues.apache.org/jira/browse/HADOOP-10589
> https://issues.apache.org/jira/browse/HADOOP-10457
> Also, the test cases should be reviewed so that they cover this scenario.
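For reference, the failure mode described above can be sketched as follows. This is a hypothetical illustration, not the actual NativeS3FsInputStream source: it only shows how a seek() that short-circuits when the target offset equals the current position can turn a reopen-on-retry path into a no-op, so the broken connection is never replaced.

```java
// Hypothetical sketch of the bug pattern (names and structure are illustrative,
// not the real Hadoop s3n code).
import java.io.IOException;

class RetrySeekSketch {
    private long pos = 0;
    private boolean reopened = false;

    // "Optimized" seek: skips the work when already at the target offset.
    void seek(long target) throws IOException {
        if (target == pos) {
            return;          // no-op: the dead connection is never replaced
        }
        pos = target;
        reopened = true;     // stands in for opening a fresh S3 connection
    }

    // Recovery path: on IOException, retry by seeking back to the current offset.
    boolean recover() throws IOException {
        seek(pos);           // target == pos, so seek returns immediately
        return reopened;     // false: the subsequent read still fails or hits EOF
    }

    public static void main(String[] args) throws IOException {
        RetrySeekSketch s = new RetrySeekSketch();
        // prints: reopened after recovery: false
        System.out.println("reopened after recovery: " + s.recover());
    }
}
```

With the position check in place, the recovery call returns without doing anything, which matches the log above: the stream reports "attempting to reopen" and is then treated as fully consumed, so the task commits truncated output.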



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)