[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-03-19 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202068#comment-15202068
 ] 

Eric Payne commented on HDFS-9634:
--

[~jojochuang], Thanks for reviewing the functionality of this patch.

Did you use the FS Shell to test this (e.g., {{hadoop fs -cat 
webhdfs://SOMEHOST/MyHome/myfile.txt}})? If so, note that the FS Shell swallows 
the stack trace and prints only the message when accessing files via webhdfs, 
and it has done so for a long time. In my test environment, I reverted this 
change, and the FS Shell behaves the same way.

When I write a test Java program, read a file via webhdfs, and inject an error, 
I do get the whole stack trace, including the cause message.
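
For reference, a minimal sketch of the kind of standalone test program meant 
here; the host, port, and path are placeholders, not values from this jira:

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsReadTest {
  public static void main(String[] args) throws Exception {
    // Placeholder cluster and file; substitute a real namenode and path.
    URI uri = URI.create("webhdfs://SOMEHOST:50070/MyHome/myfile.txt");
    try (FileSystem fs = FileSystem.get(uri, new Configuration());
         FSDataInputStream in = fs.open(new Path(uri.getPath()))) {
      byte[] buf = new byte[4096];
      while (in.read(buf) != -1) {
        // Discard the bytes; we only care about the failure path.
      }
    } catch (Exception e) {
      // Unlike the FS Shell, this surfaces the full stack trace and cause chain.
      e.printStackTrace();
    }
  }
}
{code}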

> webhdfs client side exceptions don't provide enough details
> ---
>
> Key: HDFS-9634
> URL: https://issues.apache.org/jira/browse/HDFS-9634
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0, 2.8.0, 2.7.1
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.7.3
>
> Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>
>
> When a WebHDFS client-side exception (for example, a read timeout) occurs, there 
> are no details beyond the fact that a timeout occurred. Ideally it should say 
> which node is responsible for the timeout, but failing that it should at 
> least say which node we're talking to so we can examine that node's logs to 
> investigate further.
> {noformat}
> java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:150)
> at java.net.SocketInputStream.read(SocketInputStream.java:121)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at sun.net.www.MeteredStream.read(MeteredStream.java:134)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035)
> at org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121)
> at org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
> at com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58)
> at java.io.FilterInputStream.read(FilterInputStream.java:107)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57)
> at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387)
> ... 12 more
> {noformat}
> There are no clues as to which datanode we're talking to nor which datanode 
> was responsible for the timeout.





[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-03-19 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202188#comment-15202188
 ] 

Wei-Chiu Chuang commented on HDFS-9634:
---

Thanks for the explanation, Eric.

I did make that observation in tests (HDFS-9905). If it has always been the 
case, then it's probably a better idea not to change it.



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-03-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15183227#comment-15183227
 ] 

Wei-Chiu Chuang commented on HDFS-9634:
---

This seems to contain a bug: After the exception is reinterpreted, the original 
stack trace is lost, and it's impossible to tell where the exception occurred.
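
For illustration, a hedged sketch of a re-wrap that would add detail without 
losing the trace; the helper name is hypothetical, not the committed code:

{code:java}
import java.io.IOException;

final class ExceptionAugmenter {
  // Hypothetical helper: put the remote node into the message while keeping
  // the original exception as the cause, so its stack trace stays reachable.
  static IOException augment(IOException e, String nodeAuthority) {
    return new IOException(e.getMessage() + " (node: " + nodeAuthority + ")", e);
  }
}
{code}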



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15110800#comment-15110800
 ] 

Hudson commented on HDFS-9634:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9149 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9149/])
HDFS-9634. webhdfs client side exceptions don't provide enough details. 
(kihwal: rev 3616c7b855962014750a3259a64c6e2a147da884)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTimeouts.java
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
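
As a rough, hedged illustration of what the updated {{TestWebHdfsTimeouts}} 
checks — the setup and exact assertion below are assumptions, not the committed 
test:

{code:java}
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import java.net.ServerSocket;
import java.net.SocketTimeoutException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class ReadTimeoutMessageSketch {
  @Test
  public void timeoutMessageNamesTheNode() throws Exception {
    // A socket that accepts connections but never answers, so the client's
    // HTTP read times out. In practice the test would also shorten the
    // client's socket read timeout instead of waiting for the default.
    try (ServerSocket server = new ServerSocket(0, 1)) {
      String authority = "127.0.0.1:" + server.getLocalPort();
      try (FileSystem fs = FileSystem.get(URI.create("webhdfs://" + authority),
                                          new Configuration())) {
        fs.open(new Path("/file")).read();
        fail("expected SocketTimeoutException");
      } catch (SocketTimeoutException e) {
        // With the fix, the message should identify the node we were talking to.
        assertTrue(e.getMessage(), e.getMessage().contains(authority));
      }
    }
  }
}
{code}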




[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-21 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15110779#comment-15110779
 ] 

Kihwal Lee commented on HDFS-9634:
--

Ok, it might be my env. I will commit it and will keep an eye on 2.7 builds. 



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-21 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15110644#comment-15110644
 ] 

Eric Payne commented on HDFS-9634:
--

Sorry, that should have been java 1.7 and 1.8.



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-21 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15110643#comment-15110643
 ] 

Eric Payne commented on HDFS-9634:
--

Thanks a lot, [~kihwal] and [~shahrs87]. I have run {{TestWebHdfsTimeouts}} 
several times with both java 2.7 and 2.8, and it always succeeds for me.



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15109404#comment-15109404
 ] 

Kihwal Lee commented on HDFS-9634:
--

I used jdk8 and ran {{TestWebHdfsTimeouts}}. {{testReadTimeout}} would fail 
occasionally. But if I run it alone ({{mvn test 
-Dtest=TestWebHdfsTimeouts#testReadTimeout}}), it passes 100% of the time. So I 
think it is failing due to interactions with other test cases, and it probably 
only happens with jdk8.

[~eepayne], if you find the cause of the test breakage, please file a jira. I 
don't think it blocks this jira, but I will wait for your analysis before 
committing.



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-20 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15109364#comment-15109364
 ] 

Rushabh S Shah commented on HDFS-9634:
--

The patch looks good to me.
I ran the test case on trunk and on branch-2.7 multiple times.
It ran successfully every time.
+1 (non-binding).



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-20 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15109067#comment-15109067
 ] 

Kihwal Lee commented on HDFS-9634:
--

When I tried it on 2.7, the new test case failed. It passes in trunk, of 
course.
{noformat}
TestWebHdfsTimeouts.testReadTimeout:131 expected: but was:
{noformat}




[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107625#comment-15107625
 ] 

Kihwal Lee commented on HDFS-9634:
--

+1 looks good to me.



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-19 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107396#comment-15107396
 ] 

Kuhu Shukla commented on HDFS-9634:
---

+1 (non-binding).
I had questions about why we would use {{getAuthority()}} over {{getHost()}}, 
but it does make more sense to have the port along with the hostname. Thanks, 
[~eepayne], for clarifying that.
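
For context, the distinction in a nutshell; the URI below is purely 
illustrative:

{code:java}
import java.net.URI;

public class AuthorityVsHost {
  public static void main(String[] args) {
    URI uri = URI.create("webhdfs://datanode1.example.com:50075/some/file");
    System.out.println(uri.getHost());       // datanode1.example.com
    System.out.println(uri.getAuthority());  // datanode1.example.com:50075
  }
}
{code}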



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-12 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094893#comment-15094893
 ] 

Eric Payne commented on HDFS-9634:
--

All of the unit tests that are listed above passed when I ran them in my local 
build environment.



[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094595#comment-15094595
 ] 

Hadoop QA commented on HDFS-9634:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s {color} | {color:red} Patch generated 1 new checkstyle issues in hadoop-hdfs-project (total was 61, now 62). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 2s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 16s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s {color} | {color:green} hadoop-hdfs-client in the patch passed with JDK v1.7.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 14s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 191m 55s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.namenode.TestNNThroughpu

[jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details

2016-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093108#comment-15093108
 ] 

Hadoop QA commented on HDFS-9634:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} | {color:red} HDFS-9634 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12781698/HDFS-9634.001.patch |
| JIRA Issue | HDFS-9634 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14096/console |


This message was automatically generated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)