[ https://issues.apache.org/jira/browse/HDFS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jonathan Hsieh updated HDFS-1787:
---------------------------------

    Attachment: hdfs-1787.patch

This patch updates the max transfers/xceivers message so that it is propagated to the DFS client. I was able to write a reasonable test for the write side, but testing the read side requires a change to Hadoop Common: FSDataOutputStream on the write side has a getWrappedStream method, but FSDataInputStream on the read side does not expose one.

> "Not enough xcievers" error should propagate to client
> ------------------------------------------------------
>
>                 Key: HDFS-1787
>                 URL: https://issues.apache.org/jira/browse/HDFS-1787
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Todd Lipcon
>            Assignee: Jonathan Hsieh
>              Labels: newbie
>         Attachments: hdfs-1787.patch
>
>
> We find that users often run into the default transceiver limits in the DN. Putting aside the inherent issues with xceiver threads, it would be nice if the "xceiver limit exceeded" error propagated to the client. Currently, clients simply see an EOFException, which is hard to interpret, and have to go slogging through DN logs to find the underlying issue.
> The data transfer protocol should be extended to either have a special error code for "not enough xceivers" or a generic error code to which a string can be attached and propagated.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
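The protocol extension the issue proposes — a status code with an attached, human-readable message that the client surfaces instead of a bare EOFException — could be sketched as follows. This is a minimal illustration, not the actual Hadoop DataTransferProtocol: the class name, the status constants, and the wire layout (an int code followed by a UTF string) are all assumptions made for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: a datanode writes a status code plus a free-form
// error message; the client decodes it into a descriptive IOException
// rather than failing later with an opaque EOFException.
public class StatusDemo {
    static final int SUCCESS = 0;
    static final int ERROR_NOT_ENOUGH_XCEIVERS = 5; // illustrative value only

    // "DN" side: write the status code followed by a UTF-8 message string.
    static void writeStatus(DataOutputStream out, int code, String msg)
            throws IOException {
        out.writeInt(code);
        out.writeUTF(msg);
        out.flush();
    }

    // "Client" side: read the status; on error, raise an exception that
    // carries the datanode's explanation to the caller.
    static void readStatus(DataInputStream in) throws IOException {
        int code = in.readInt();
        String msg = in.readUTF();
        if (code != SUCCESS) {
            throw new IOException("Datanode error " + code + ": " + msg);
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate the wire with an in-memory buffer.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeStatus(new DataOutputStream(buf), ERROR_NOT_ENOUGH_XCEIVERS,
                "xceiver count exceeds the configured limit");
        try {
            readStatus(new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray())));
        } catch (IOException e) {
            System.out.println(e.getMessage()); // message reaches the client
        }
    }
}
```

Either variant in the issue fits this shape: a dedicated code for "not enough xceivers", or a single generic error code whose meaning lives entirely in the attached string.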