[ https://issues.apache.org/jira/browse/HDFS-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ding Yuan updated HDFS-5822:
----------------------------

    Description: 
In org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java, there is the 
following code snippet in the run() method:

{noformat}
      } catch (OutOfMemoryError ie) {
        IOUtils.cleanup(null, peer);
        // DataNode can run out of memory if there is too many transfers.
        // Log the event, Sleep for 30 seconds, other transfers may complete by
        // then.
        LOG.warn("DataNode is out of memory. Will retry in 30 seconds.", ie);
        try {
          Thread.sleep(30 * 1000);
        } catch (InterruptedException e) {
          // ignore
        }
      }
{noformat}

Note that the InterruptedException is completely ignored. This may not be safe, 
since whatever event caused the interruption is silently lost.

More info on why InterruptedException shouldn't be ignored: 
http://stackoverflow.com/questions/1087475/when-does-javas-thread-sleep-throw-interruptedexception
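
For illustration only (a sketch, not the actual DataXceiverServer code): a common 
remedy is to restore the thread's interrupt status instead of swallowing it, so the 
surrounding run() loop can still notice that an interrupt/shutdown was requested:

{noformat}
try {
  Thread.sleep(30 * 1000);
} catch (InterruptedException e) {
  // Re-set the interrupt flag that Thread.sleep() cleared when it threw,
  // so the enclosing loop can observe the interruption and exit cleanly.
  Thread.currentThread().interrupt();
  LOG.warn("Sleep after OutOfMemoryError was interrupted.", e);
}
{noformat}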

Thanks,
Ding

> InterruptedException to thread sleep ignored
> --------------------------------------------
>
>                 Key: HDFS-5822
>                 URL: https://issues.apache.org/jira/browse/HDFS-5822
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.2.0
>            Reporter: Ding Yuan
>
> In org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java, there is 
> the following code snippet in the run() method:
> {noformat}
>       } catch (OutOfMemoryError ie) {
>         IOUtils.cleanup(null, peer);
>         // DataNode can run out of memory if there is too many transfers.
>         // Log the event, Sleep for 30 seconds, other transfers may complete by
>         // then.
>         LOG.warn("DataNode is out of memory. Will retry in 30 seconds.", ie);
>         try {
>           Thread.sleep(30 * 1000);
>         } catch (InterruptedException e) {
>           // ignore
>         }
>       }
> {noformat}
> Note that the InterruptedException is completely ignored. This may not be safe, 
> since whatever event caused the interruption is silently lost.
> More info on why InterruptedException shouldn't be ignored: 
> http://stackoverflow.com/questions/1087475/when-does-javas-thread-sleep-throw-interruptedexception
> Thanks,
> Ding



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
