[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133284#comment-17133284
 ] 

Stephen O'Donnell commented on HDFS-15408:
--

This started happening after HDFS-2538, which shipped in 3.0.0. The reason is 
that fsck no longer shows progress (it used to print a dot per file), so if the 
cluster is large, the read timeout can be hit. At Cloudera we have seen a lot 
of customers frustrated by this since they moved to a 3.x release.

There were lots of ideas to fix this on HDFS-7175, but in the end I just turned 
the dots back on, printing fewer of them. 

You can work around this issue by using the -showprogress switch when running 
fsck.
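
For instance, on an affected 3.x client the check can usually be kept alive 
like this (a sketch; the extra output grows with the number of files in the 
namespace):

{code}
# Workaround sketch: re-enable per-file progress dots so the NameNode keeps
# writing to the HTTP response and the client-side read timeout is not hit.
hdfs fsck / -showprogress
{code}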

> Failed execution caused by SocketTimeoutException
> -
>
> Key: HDFS-15408
> URL: https://issues.apache.org/jira/browse/HDFS-15408
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: echohlne
>Priority: Major
>
> When I execute the command {{hdfs fsck /}} in the Hadoop cluster to check the 
> health of the cluster, it always fails with an error like the one below:
> {code}
> Connecting to namenode via http://hadoop20:50070/fsck?ugi=hdfs&path=%2F
> Exception in thread "main" java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
>   at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:359)
>   at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
>   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:159)
>   at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:156)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:155)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:402)
> {code}
> We tried to solve this problem by adding a new parameter, 
> {color:#de350b}*dfs.fsck.http.timeout.ms*{color}, to control the 
> connectTimeout and the readTimeout of the HttpURLConnection in DFSck.java. 
> Please check whether this is the right way to solve the problem. Thanks a lot!
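
For illustration only, the proposed property (hypothetical; it is not part of 
any released Hadoop version) could be set in hdfs-site.xml like this:

{code}
<!-- hdfs-site.xml: hypothetical key from the patch proposed in this issue -->
<property>
  <name>dfs.fsck.http.timeout.ms</name>
  <!-- raise fsck's HTTP connect/read timeout from the 1-minute default -->
  <value>300000</value>
</property>
{code}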






[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread echohlne (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133281#comment-17133281
 ] 

echohlne commented on HDFS-15408:
-

Thanks for your advice. I must admit that you are right; maybe I should find 
the root cause first rather than just making the timeout configurable. 







[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133268#comment-17133268
 ] 

Ayush Saxena commented on HDFS-15408:
-

Huge in terms of DataNodes? Why would the number of DataNodes impact fsck? 
If it is the number of DataNodes: we have run fsck under heavy load on more 
than ~10K DataNodes and, to my knowledge, never faced this issue. 
Did you observe this on a production cluster or a test cluster? One minute 
seems long enough, so I suspect this could be some cluster issue. I'm not 
against making it configurable, but...








[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133252#comment-17133252
 ] 

Hadoop QA commented on HDFS-15408:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 48s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 39s | trunk passed |
| +1 | compile | 1m 10s | trunk passed |
| +1 | checkstyle | 0m 54s | trunk passed |
| +1 | mvnsite | 1m 18s | trunk passed |
| +1 | shadedclient | 15m 54s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 45s | trunk passed |
| 0 | spotbugs | 2m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 50s | trunk passed |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 35s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 35s | hadoop-hdfs in the patch failed. |
| -1 | javac | 0m 35s | hadoop-hdfs in the patch failed. |
| -0 | checkstyle | 0m 49s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 477 unchanged - 0 fixed = 480 total (was 477) |
| -1 | mvnsite | 0m 37s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | shadedclient | 4m 27s | patch has errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 39s | the patch passed |
| -1 | findbugs | 0m 37s | hadoop-hdfs in the patch failed. |
|| || || || Other Tests ||
| -1 | unit | 0m 38s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 52m 41s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29421/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15408 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13005483/HDFS15408.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 033d7ec8b8cf 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 93b121a9717 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| mvninstall | 

[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread echohlne (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133232#comment-17133232
 ] 

echohlne commented on HDFS-15408:
-

[~hemanthboyina] 
Thanks for your friendly reply. 
In most cases one minute is enough, but our cluster is large and growing. The 
socket timeout in the URLConnectionFactory used by DFSck is hard-coded to 1 
minute and cannot be changed, so I tried adding a new parameter to control the 
socket timeout. If the parameter is not configured, the default of 1 minute is 
still used.
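
A minimal sketch of that idea, assuming the property name from the description 
and using plain java.net APIs rather than Hadoop's internal URLConnectionFactory 
(the helper below is illustrative, not the actual patch):

{code}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;

public class FsckTimeoutSketch {
  // Proposed key from this issue; the 60s default mirrors the
  // currently hard-coded socket timeout.
  static final String FSCK_HTTP_TIMEOUT_KEY = "dfs.fsck.http.timeout.ms";
  static final int FSCK_HTTP_TIMEOUT_DEFAULT = 60_000;

  /** Open the fsck servlet URL with a configurable connect/read timeout. */
  static HttpURLConnection openFsckConnection(Configuration conf, URL url)
      throws IOException {
    int timeoutMs = conf.getInt(FSCK_HTTP_TIMEOUT_KEY, FSCK_HTTP_TIMEOUT_DEFAULT);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setConnectTimeout(timeoutMs); // time allowed to establish the connection
    conn.setReadTimeout(timeoutMs);    // max silence between reads of the response
    return conn;
  }
}
{code}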







[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133231#comment-17133231
 ] 

Hadoop QA commented on HDFS-15408:
--

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 40s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 40s | trunk passed |
| +1 | compile | 1m 8s | trunk passed |
| +1 | checkstyle | 0m 59s | trunk passed |
| +1 | mvnsite | 1m 14s | trunk passed |
| +1 | shadedclient | 16m 39s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 47s | trunk passed |
| 0 | spotbugs | 3m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 3m 6s | trunk passed |
|| || || || Patch Compile Tests ||
| -1 | mvninstall | 0m 36s | hadoop-hdfs in the patch failed. |
| -1 | compile | 0m 37s | hadoop-hdfs in the patch failed. |
| -1 | javac | 0m 37s | hadoop-hdfs in the patch failed. |
| -0 | checkstyle | 0m 47s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 477 unchanged - 0 fixed = 480 total (was 477) |
| -1 | mvnsite | 0m 39s | hadoop-hdfs in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | shadedclient | 4m 13s | patch has errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 42s | the patch passed |
| -1 | findbugs | 0m 38s | hadoop-hdfs in the patch failed. |
|| || || || Other Tests ||
| -1 | unit | 0m 41s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 28s | The patch does not generate ASF License warnings. |
| | | 53m 28s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29420/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15408 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13005475/HDFS15408.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux cd25a2ca79da 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 93b121a9717 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| mvninstall | 

[jira] [Commented] (HDFS-15408) Failed execution caused by SocketTimeoutException

2020-06-11 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17133225#comment-17133225
 ] 

hemanthboyina commented on HDFS-15408:
--

Thanks for filing the issue, [~echohlne].

At present the default socket timeout is 1 minute. Do you think 1 minute is 
not sufficient for the fsck HTTP connection?




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org