[
https://issues.apache.org/jira/browse/HADOOP-442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Doug Cutting updated HADOOP-442:
--------------------------------
Resolution: Fixed
Fix Version/s: 0.12.0
Status: Resolved (was: Patch Available)
I just committed this. Thanks, Wendy!
> slaves file should include an 'exclude' section, to prevent "bad" datanodes
> and tasktrackers from disrupting a cluster
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-442
> URL: https://issues.apache.org/jira/browse/HADOOP-442
> Project: Hadoop
> Issue Type: Bug
> Components: conf
> Reporter: Yoram Arnon
> Assigned To: Wendy Chien
> Fix For: 0.12.0
>
> Attachments: hadoop-442-10.patch, hadoop-442-11.patch,
> hadoop-442-8.patch
>
>
> I recently had a few nodes go bad: they were inaccessible via ssh but were
> still running their Java processes.
> Tasks that executed on them were failing, causing jobs to fail.
> I couldn't stop the Java processes because of the ssh issue, so I was
> helpless until I could actually power down these nodes.
> Restarting the cluster doesn't help, even when the bad nodes are removed
> from the slaves file - they simply reconnect and are accepted.
> While we plan to prevent tasks from repeatedly launching on the same bad
> nodes, what I'd like is a way to prevent rogue processes from connecting
> to the masters.
> Ideally, the slaves file would contain an 'exclude' section listing nodes
> that shouldn't be contacted and that should be ignored if they try to
> connect. That would also help in configuring the slaves file for a large
> cluster - I'd list the full range of machines in the cluster, then list
> the ones that are down in the 'exclude' section.
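
As a minimal sketch of how such an exclusion can be wired up, assuming the
dfs.hosts.exclude and mapred.hosts.exclude properties from Hadoop's host
filtering (these property names and the file path are drawn from Hadoop's
configuration, not spelled out in this message):

  <!-- hadoop-site.xml: name a file of hosts the masters must refuse -->
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/path/to/conf/excludes</value>
  </property>
  <property>
    <name>mapred.hosts.exclude</name>
    <value>/path/to/conf/excludes</value>
  </property>

The excludes file itself is plain text, one hostname per line; datanodes
and tasktrackers on listed hosts are refused when they try to register,
even if rogue processes are still running on them:

  badnode01.example.com
  badnode02.example.com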
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.