[jira] Created: (HADOOP-257) starting one data node thread to manage multiple data directories

2006-05-26 Thread Hairong Kuang (JIRA)
starting one data node thread to manage multiple data directories - Key: HADOOP-257 URL: http://issues.apache.org/jira/browse/HADOOP-257 Project: Hadoop Type: Improvement Components: dfs Reporte

[jira] Resolved: (HADOOP-163) If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it.

2006-05-26 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-163?page=all ] Doug Cutting resolved HADOOP-163: - Resolution: Fixed This looks great! I just committed it. Thanks, Hairong! > If a DFS datanode cannot write onto its file system, it should tell the na

[jira] Updated: (HADOOP-163) If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it.

2006-05-26 Thread Hairong Kuang (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-163?page=all ] Hairong Kuang updated HADOOP-163: - Attachment: disk.patch In this patch, if a data node finds that its data directory becomes not readable or writable, it logs the error and reports the proble

[jira] Commented: (HADOOP-256) Implement a C api for hadoop dfs

2006-05-26 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-256?page=comments#action_12413540 ] Doug Cutting commented on HADOOP-256: - This looks good, but the build still needs some work. I tried to run 'make' in the source directory, but had to first add $(LDFLAGS)

[jira] Commented: (HADOOP-256) Implement a C api for hadoop dfs

2006-05-26 Thread Arun C Murthy (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-256?page=comments#action_12413538 ] Arun C Murthy commented on HADOOP-256: -- I have attached libhdfs.patch... It creates a new sub-dir: hadoop/src/c++/libhdfs and also a port of hadoop/src/test/org/apache/h

[jira] Updated: (HADOOP-256) Implement a C api for hadoop dfs

2006-05-26 Thread Anonymous (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-256?page=all ] updated HADOOP-256: Attachment: libhdfs.patch > Implement a C api for hadoop dfs > > > Key: HADOOP-256 > URL: http://issues.apache.org/jira/browse/HADOOP

[jira] Commented: (HADOOP-90) DFS is susceptible to data loss in case of name node failure

2006-05-26 Thread Yoram Arnon (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-90?page=comments#action_12413534 ] Yoram Arnon commented on HADOOP-90: --- I've done the same, alternating between two backup nodes. It's a band-aid until a real solution is devised. > DFS is susceptible to data l

[jira] Commented: (HADOOP-90) DFS is susceptible to data loss in case of name node failure

2006-05-26 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-90?page=comments#action_12413533 ] Doug Cutting commented on HADOOP-90: The low-tech thing I've done that's saved me when the namenode dies is simply to have a cron entry that rsyncs the namenode's files to
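The low-tech safeguard Doug describes amounts to periodically mirroring the namenode's metadata directory to another machine. As a hedged illustration only (the schedule, paths, and hostname below are hypothetical, not from his setup), such a cron entry might look like:

```
# Hypothetical example: every 5 minutes, mirror the namenode metadata
# directory to a standby host.  Paths and hostname are illustrative.
*/5 * * * *  rsync -a /path/to/hadoop/dfs/name/ backup-host:/path/to/name-backup/
```

This limits loss to at most one sync interval of namespace edits, which is the "band-aid" character both commenters acknowledge.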

[jira] Commented: (HADOOP-90) DFS is susceptible to data loss in case of name node failure

2006-05-26 Thread Yoram Arnon (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-90?page=comments#action_12413532 ] Yoram Arnon commented on HADOOP-90: --- Seems like all the extra copies will be on the same node, right? So if it dies, so does the filesystem... Perhaps the bug should be cloned

[jira] Created: (HADOOP-256) Implement a C api for hadoop dfs

2006-05-26 Thread Arun C Murthy (JIRA)
Implement a C api for hadoop dfs Key: HADOOP-256 URL: http://issues.apache.org/jira/browse/HADOOP-256 Project: Hadoop Type: New Feature Components: dfs Reporter: Arun C Murthy Implement a C api for hadoop dfs to ease talkin
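The API proposed in HADOOP-256 became the libhdfs library. The attached patch is not visible in this digest, so the sketch below is only an assumption of what client code against such an API might look like; the function names follow the hdfs.h header that later shipped with Hadoop and may differ from the patch as attached here:

```c
#include <fcntl.h>
#include "hdfs.h" /* libhdfs header; names below are illustrative */

int main(void)
{
    /* Connect to a namenode; "default" and port 0 mean "use the
     * configured default filesystem" in the later libhdfs API. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs)
        return 1;

    /* Write a small file through the C wrapper over the Java client. */
    hdfsFile f = hdfsOpenFile(fs, "/tmp/hello.txt", O_WRONLY, 0, 0, 0);
    const char *msg = "hello, dfs\n";
    hdfsWrite(fs, f, msg, 11);
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
```

Under the hood the library invokes the Java DFS client through JNI, which is why the build discussion in the comments above revolves around linker flags.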

[jira] Resolved: (HADOOP-254) use http to shuffle data between the maps and the reduces

2006-05-26 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-254?page=all ] Doug Cutting resolved HADOOP-254: - Resolution: Fixed I just committed this. You rock, Owen. > use http to shuffle data between the maps and the reduces > -

[jira] Updated: (HADOOP-254) use http to shuffle data between the maps and the reduces

2006-05-26 Thread Owen O'Malley (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-254?page=all ] Owen O'Malley updated HADOOP-254: - Attachment: http-shuffle-2.patch Ok, here is an updated patch that addresses Doug's concerns. 1. Local file system is now used for reading and writing the ma

[jira] Resolved: (HADOOP-108) EOFException in DataNode$DataXceiver.run

2006-05-26 Thread Sameer Paranjpye (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-108?page=all ] Sameer Paranjpye resolved HADOOP-108: - Resolution: Duplicate Duplicate of HADOOP-128 > EOFException in DataNode$DataXceiver.run > > >

[jira] Updated: (HADOOP-108) EOFException in DataNode$DataXceiver.run

2006-05-26 Thread Sameer Paranjpye (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-108?page=all ] Sameer Paranjpye updated HADOOP-108: Fix Version: 0.2 Version: 0.1.1 > EOFException in DataNode$DataXceiver.run > > > Key: HADOOP-1

[jira] Commented: (HADOOP-255) Client Calls are not cancelled after a call timeout

2006-05-26 Thread paul sutter (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-255?page=comments#action_12413505 ] paul sutter commented on HADOOP-255: Doug, We realize all of this will change with all the great copy/sort-path work being done at Yahoo, but here's what we're seeing: W

[jira] Commented: (HADOOP-255) Client Calls are not cancelled after a call timeout

2006-05-26 Thread Naveen Nalam (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-255?page=comments#action_12413498 ] Naveen Nalam commented on HADOOP-255: - Well, the problem I was seeing is that a getFile RPC request for, say, 1GB was issued, but then the Call object timed out on the client.

[jira] Commented: (HADOOP-255) Client Calls are not cancelled after a call timeout

2006-05-26 Thread Doug Cutting (JIRA)
[ http://issues.apache.org/jira/browse/HADOOP-255?page=comments#action_12413490 ] Doug Cutting commented on HADOOP-255: - I think this is, in general, something that we won't fix. It might be possible to improve things, but we cannot, without elaborate