starting one data node thread to manage multiple data directories
-------------------------------------------
Key: HADOOP-257
URL: http://issues.apache.org/jira/browse/HADOOP-257
Project: Hadoop
Type: Improvement
Components: dfs
Reporter:
[ http://issues.apache.org/jira/browse/HADOOP-163?page=all ]
Doug Cutting resolved HADOOP-163:
-------------------------------------------
Resolution: Fixed
This looks great! I just committed it. Thanks, Hairong!
> If a DFS datanode cannot write onto its file system, it should tell the namenode
[ http://issues.apache.org/jira/browse/HADOOP-163?page=all ]
Hairong Kuang updated HADOOP-163:
-------------------------------------------
Attachment: disk.patch
In this patch, if a data node finds that its data directory is no longer
readable or writable, it logs the error and reports the problem
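The attachment itself (disk.patch) is not shown in this thread. As a rough illustration of the kind of check Hairong describes, here is a small Java sketch that tests whether a data directory is still readable and writable; the class and method names are invented for this example and are not from the patch.

```java
import java.io.File;

// Illustrative only: disk.patch is not shown in the thread, so this is a
// minimal sketch of the described check, not the actual datanode code.
public class DataDirCheck {

    /** Returns true if the directory exists and is both readable and writable. */
    static boolean isUsable(File dir) {
        return dir.isDirectory() && dir.canRead() && dir.canWrite();
    }

    public static void main(String[] args) {
        // Placeholder path; a real datanode would use its configured data dir.
        File dataDir = new File(System.getProperty("java.io.tmpdir"));
        if (isUsable(dataDir)) {
            System.out.println("OK: " + dataDir);
        } else {
            // Per the description, a real datanode would log this error
            // and report the problem upward.
            System.err.println("Data directory unusable: " + dataDir);
        }
    }
}
```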
[ http://issues.apache.org/jira/browse/HADOOP-256?page=comments#action_12413540 ]
Doug Cutting commented on HADOOP-256:
-------------------------------------------
This looks good, but the build still needs some work.
I tried to run 'make' in the source directory, but had to first add $(LDFLAGS)
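Doug's comment is cut off, but the fix he describes is the usual one of making the link rule honor $(LDFLAGS). A hedged sketch of such a make rule follows; the target and object file names are placeholders, not taken from the actual libhdfs build files.

```make
# Illustrative only: target and object names are placeholders. The point
# is that the link rule must pass $(LDFLAGS) so externally supplied linker
# options (for example, JVM library paths) actually reach the linker.
libhdfs.so: hdfs.o
	$(CC) -shared -o $@ $^ $(LDFLAGS)
```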
[ http://issues.apache.org/jira/browse/HADOOP-256?page=comments#action_12413538 ]
Arun C Murthy commented on HADOOP-256:
-------------------------------------------
I have attached libhdfs.patch...
It creates a new sub-dir: hadoop/src/c++/libhdfs and also a port of
hadoop/src/test/org/apache/h
[ http://issues.apache.org/jira/browse/HADOOP-256?page=all ]
updated HADOOP-256:
Attachment: libhdfs.patch
> Implement a C api for hadoop dfs
>
>
> Key: HADOOP-256
> URL: http://issues.apache.org/jira/browse/HADOOP
[ http://issues.apache.org/jira/browse/HADOOP-90?page=comments#action_12413534 ]
Yoram Arnon commented on HADOOP-90:
-------------------------------------------
I've done the same, alternating between two backup nodes.
It's a band-aid until a real solution is devised.
> DFS is susceptible to data loss
[ http://issues.apache.org/jira/browse/HADOOP-90?page=comments#action_12413533 ]
Doug Cutting commented on HADOOP-90:
The low-tech thing I've done that's saved me when the namenode dies is simply
to have a cron entry that rsyncs the namenode's files to
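Doug's message is truncated, but the low-tech backup he describes can be sketched as a crontab entry. The schedule, the namenode directory, and the backup host below are all placeholders, not taken from the thread.

```cron
# Placeholder paths and host; point this at the namenode's actual on-disk
# files. Every 10 minutes, mirror the namenode's state to a second machine
# so the filesystem can be recovered if the namenode dies.
*/10 * * * * rsync -a /path/to/namenode/dir/ backup-host:/namenode-backup/
```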
[ http://issues.apache.org/jira/browse/HADOOP-90?page=comments#action_12413532 ]
Yoram Arnon commented on HADOOP-90:
-------------------------------------------
Seems like all the extra copies will be on the same node, right?
So if it dies, so does the filesystem...
Perhaps the bug should be cloned
Implement a C api for hadoop dfs
Key: HADOOP-256
URL: http://issues.apache.org/jira/browse/HADOOP-256
Project: Hadoop
Type: New Feature
Components: dfs
Reporter: Arun C Murthy
Implement a C api for hadoop dfs to ease talking
[ http://issues.apache.org/jira/browse/HADOOP-254?page=all ]
Doug Cutting resolved HADOOP-254:
-------------------------------------------
Resolution: Fixed
I just committed this. You rock, Owen.
> use http to shuffle data between the maps and the reduces
> -
[ http://issues.apache.org/jira/browse/HADOOP-254?page=all ]
Owen O'Malley updated HADOOP-254:
-------------------------------------------
Attachment: http-shuffle-2.patch
Ok, here is an updated patch that addresses Doug's concerns.
1. Local file system is now used for reading and writing the ma
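The http-shuffle-2.patch itself is only summarized above. As a rough, self-contained illustration of the idea behind HADOOP-254, here is a Java sketch that serves a byte buffer over HTTP and fetches it back with a plain GET, using the JDK's built-in com.sun.net.httpserver. The "/mapOutput" path and all class and method names are invented for this example.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

// Illustrative sketch only, not the actual patch: the idea is that a task
// tracker can serve map output bytes over plain HTTP, so reduces fetch
// them with an ordinary HTTP GET instead of a custom transfer protocol.
public class MapOutputServer {

    /** Starts a server for the payload, fetches it back over HTTP, stops. */
    static byte[] serveAndFetch(byte[] payload) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/mapOutput", exchange -> {
            exchange.sendResponseHeaders(200, payload.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(payload);
            }
        });
        server.start();
        try (InputStream in = new URL("http://localhost:"
                + server.getAddress().getPort() + "/mapOutput").openStream()) {
            return in.readAllBytes();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] got = serveAndFetch("fake map output".getBytes("UTF-8"));
        System.out.println("fetched " + got.length + " bytes");
    }
}
```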
[ http://issues.apache.org/jira/browse/HADOOP-108?page=all ]
Sameer Paranjpye resolved HADOOP-108:
-------------------------------------------
Resolution: Duplicate
Duplicate of HADOOP-128
> EOFException in DataNode$DataXceiver.run
>
>
>
[ http://issues.apache.org/jira/browse/HADOOP-108?page=all ]
Sameer Paranjpye updated HADOOP-108:
Fix Version: 0.2
Version: 0.1.1
> EOFException in DataNode$DataXceiver.run
>
>
> Key: HADOOP-1
[ http://issues.apache.org/jira/browse/HADOOP-255?page=comments#action_12413505 ]
paul sutter commented on HADOOP-255:
Doug,
We realize all of this will change with all the great copy/sort-path work being
done at Yahoo, but here's what we're seeing: W
[ http://issues.apache.org/jira/browse/HADOOP-255?page=comments#action_12413498 ]
Naveen Nalam commented on HADOOP-255:
-------------------------------------------
Well, the problem I was seeing is that a getFile RPC request for, say, 1 GB was
issued, but then the Call object timed out on the client.
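The thread does not show Hadoop's eventual fix, but one standard way around a single huge request timing out is to move the data in bounded chunks, so that no one call has to fit the whole transfer inside the RPC timeout. Below is a minimal, Hadoop-free Java sketch of that pattern; the class name and the chunk size are assumptions, not from the issue.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Illustrative sketch only: Hadoop's IPC layer is not shown here. The
// point is that data moves in bounded chunks, so no single request has
// to cover an arbitrarily large transfer within one RPC timeout.
public class ChunkedCopy {
    static final int CHUNK = 64 * 1024; // 64 KB per read, an assumed size

    /** Copies all bytes from in to out, at most CHUNK bytes per read. */
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[CHUNK];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[300_000]; // stand-in for a large file
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println("copied " + copied + " bytes");
    }
}
```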
[ http://issues.apache.org/jira/browse/HADOOP-255?page=comments#action_12413490 ]
Doug Cutting commented on HADOOP-255:
-------------------------------------------
I think this is, in general, something that we won't fix. It might be possible
to improve things, but we cannot, without elaborate