[ http://issues.apache.org/jira/browse/HADOOP-381?page=all ]
Owen O'Malley updated HADOOP-381:
-
Attachment: keep-task-file-pattern.patch
1. adds the keep.failed.task.files and keep.task.files.pattern variables to
hadoop-default.xml
2. adds set/getKeepTa
keeping files for tasks that match regex on task id
---
Key: HADOOP-381
URL: http://issues.apache.org/jira/browse/HADOOP-381
Project: Hadoop
Issue Type: New Feature
Components: mapred
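The patch's two new variables aren't shown above; a sketch of how such entries might look in hadoop-default.xml (default values and description text are assumptions, not copied from the actual patch):

```xml
<!-- Illustrative entries only; defaults and descriptions are assumed, -->
<!-- not taken from keep-task-file-pattern.patch. -->
<property>
  <name>keep.failed.task.files</name>
  <value>false</value>
  <description>Should the local files of failed tasks be kept for
  debugging?</description>
</property>
<property>
  <name>keep.task.files.pattern</name>
  <value></value>
  <description>Keep the local files of any task whose task id matches
  this regular expression.</description>
</property>
```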
[ http://issues.apache.org/jira/browse/HADOOP-380?page=all ]
Mahadev konar updated HADOOP-380:
-
Attachment: polling.patch
Just a one-line change that fixes the lastPollTime.

> The reduce tasks poll for mapoutputs in a loop
>
The reduce tasks poll for mapoutputs in a loop
--
Key: HADOOP-380
URL: http://issues.apache.org/jira/browse/HADOOP-380
Project: Hadoop
Issue Type: Bug
Components: mapred
Repor
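The truncated message doesn't show the loop or the fix; as a minimal sketch of the kind of poll-interval guard a reduce task's map-output polling involves (class and field names, and the one-second interval, are illustrative assumptions, not Hadoop's actual ReduceTask code):

```java
// Sketch of a poll-interval guard; names and the interval are assumptions,
// not the actual Hadoop code the one-line patch touches.
public class MapOutputPoller {
    static final long MIN_POLL_INTERVAL_MS = 1000;
    private long lastPollTime = 0;

    // Only ask the JobTracker for new map outputs once per interval.
    boolean shouldPoll(long nowMs) {
        if (nowMs - lastPollTime >= MIN_POLL_INTERVAL_MS) {
            // Updating lastPollTime on every poll is exactly the kind of
            // bookkeeping a one-line fix would correct if it went stale.
            lastPollTime = nowMs;
            return true;
        }
        return false;
    }
}
```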
[
http://issues.apache.org/jira/browse/HADOOP-362?page=comments#action_12422786 ]
Owen O'Malley commented on HADOOP-362:
--
There is another aspect/cause of this that we just observed. What we saw was
a large job with 5 maps that were not r
[ http://issues.apache.org/jira/browse/HADOOP-362?page=all ]
Owen O'Malley reassigned HADOOP-362:
Assignee: Owen O'Malley (was: Devaraj Das)
> tasks can get lost when reporting task completion to the JobTracker has an
> error
>
provide progress feedback while the reducer is sorting
--
Key: HADOOP-379
URL: http://issues.apache.org/jira/browse/HADOOP-379
Project: Hadoop
Issue Type: Improvement
Reporter:
[ http://issues.apache.org/jira/browse/HADOOP-260?page=all ]
Milind Bhandarkar updated HADOOP-260:
-
Attachment: hadoop-config.patch
Added a [--config confdir] optional parameter to all hadoop shell scripts. The
scripts now use the portable source co
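The message is truncated before the mechanism is described; a sketch of the sort of `[--config confdir]` handling a wrapper script might add (function and variable names are illustrative, not the patch's actual contents):

```shell
# Sketch of optional [--config confdir] handling for the hadoop shell
# scripts; names are illustrative, not the actual hadoop-config.patch.
parse_config_opt() {
  if [ "$1" = "--config" ]; then
    shift
    HADOOP_CONF_DIR="$1"   # point this invocation at its own conf dir
    shift
  fi
  REMAINING_ARGS="$*"      # whatever the script was originally passed
}
```

With something like this in every script, e.g. `bin/start-dfs.sh --config /path/to/conf`, several Hadoop instances with different conf directories can run side by side, which is the point of the issue.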
[ http://issues.apache.org/jira/browse/HADOOP-260?page=all ]
Milind Bhandarkar updated HADOOP-260:
-
Attachment: (was: config.patch)
> the start up scripts should take a command line parameter --config making it
> easy to run multiple hadoop inst
[
http://issues.apache.org/jira/browse/HADOOP-344?page=comments#action_12422758 ]
Konstantin Shvachko commented on HADOOP-344:
This would also work. But there are 2 other places that already pass canonical
paths.
So we will have to c
too many files are distributed to slaves nodes
--
Key: HADOOP-378
URL: http://issues.apache.org/jira/browse/HADOOP-378
Project: Hadoop
Issue Type: Bug
Components: conf
Reporte
[
http://issues.apache.org/jira/browse/HADOOP-375?page=comments#action_12422739 ]
Yoram Arnon commented on HADOOP-375:
it would be good to have a unit test for multiple datanodes in a machine
> Introduce a way for datanodes to register their
[ http://issues.apache.org/jira/browse/HADOOP-377?page=all ]
Jean-Baptiste Quenot updated HADOOP-377:
Attachment: 20060721-hadoop-Configuration-URL
Patch against hadoop 0.4.0
> Configuration does not handle
Current Configuration allows:
* pointing to a resource in the classpath
* local path on the file system
The attached patch handles java.net.URL. We use it to load hadoop-client.xml
from a JAR.
Thanks in advance!
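The three lookup styles the comment lists can be illustrated with plain JDK calls (this sketch uses java.net.URL directly, not Hadoop's Configuration API, and the jar path is made up):

```java
import java.net.URL;

// Illustrative only: the three ways of locating a configuration resource
// described above. The jar: URL form is what the attached patch lets
// Configuration accept; paths here are invented for the example.
public class ResourceLookup {
    public static void main(String[] args) throws Exception {
        // 1. a resource on the classpath (may live inside a JAR)
        URL fromClasspath = ResourceLookup.class.getClassLoader()
                .getResource("hadoop-client.xml"); // null unless present
        // 2. a local file system path: new File("conf/hadoop-site.xml")
        // 3. an explicit URL, e.g. pointing into a JAR:
        URL fromJar = new URL("jar:file:/opt/app.jar!/hadoop-client.xml");
        System.out.println(fromJar.getProtocol()); // "jar"
    }
}
```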
--
This message is automatically generated by JIRA
[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422724 ]
Owen O'Malley commented on HADOOP-54:
-
-1 on adding flush to the public api.
I just checked and the only users of SequenceFile.Writer.append(byte[], ...) in
bo
[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422719 ]
eric baldeschwieler commented on HADOOP-54:
---
Arun, what does flush do exactly? Does it create a block boundary? I'd vote
for not expanding the interface
[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422716 ]
eric baldeschwieler commented on HADOOP-54:
---
Sounds like we better be careful here.
This raw interface is presumably used mainly by the framework? So w
[
http://issues.apache.org/jira/browse/HADOOP-362?page=comments#action_12422701 ]
Devaraj Das commented on HADOOP-362:
Discovered a minor problem with this patch which caused the status updates to
never happen on the job status web page. The
[ http://issues.apache.org/jira/browse/HADOOP-312?page=all ]
Devaraj Das reassigned HADOOP-312:
--
Assignee: Devaraj Das
> Connections should not be cached
>
>
> Key: HADOOP-312
> URL: http:
[ http://issues.apache.org/jira/browse/HADOOP-376?page=all ]
Owen O'Malley updated HADOOP-376:
-
Attachment: datanode-http-port-scan.patch
This patch enables the port scan.
> Datanode does not scan for an open http port
>
Datanode does not scan for an open http port
Key: HADOOP-376
URL: http://issues.apache.org/jira/browse/HADOOP-376
Project: Hadoop
Issue Type: Bug
Components: dfs
Affects Versions: 0.
[
http://issues.apache.org/jira/browse/HADOOP-375?page=comments#action_12422677 ]
Devaraj Das commented on HADOOP-375:
The way I am thinking of doing this is to add a new field in
DatanodeRegistration class called infoPort. The infoPort is s
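The comment is cut off mid-sentence; as a sketch of the approach it describes, a registration object carrying the HTTP info port might look like this (only `infoPort` comes from the comment; everything else is a placeholder, not the real DatanodeRegistration class):

```java
// Sketch of Devaraj's proposal: the datanode's registration carries its
// HTTP info port so the NameNode learns it. Only infoPort is from the
// comment; the other members are placeholders.
public class DatanodeRegistrationSketch {
    String name;   // host:port of the data transfer address (placeholder)
    int infoPort;  // the proposed new field: port of the HTTP info server

    DatanodeRegistrationSketch(String name, int infoPort) {
        this.name = name;
        this.infoPort = infoPort;
    }
}
```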
[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422673 ]
Owen O'Malley commented on HADOOP-54:
-
One more thing: the "append" isn't appending bytes, but a preserialized
key/value pair. The interface is a little unfortunate
[
http://issues.apache.org/jira/browse/HADOOP-372?page=comments#action_12422668 ]
Runping Qi commented on HADOOP-372:
---
Doug,
I did respond to your suggestion in my previous comment (it is in the middle
of the text, so it may be easy to overlook :)
[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422665 ]
Owen O'Malley commented on HADOOP-54:
-
The "raw" append/next interface for SequenceFile is intended to get the raw
bytes from the file. Its intended use was for
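Since "raw" append takes a preserialized key/value pair rather than arbitrary bytes, a simplified sketch of what such a record might contain (the length-prefixed layout here is an illustration, not SequenceFile's exact on-disk format):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Illustrative sketch: a "raw" record is a key/value pair already
// serialized in the file's record format. The layout below (record length,
// key length, key bytes, value bytes) is a simplification, not
// SequenceFile's actual format.
public class RawRecord {
    static byte[] encode(byte[] key, byte[] value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(key.length + value.length); // total record length
        out.writeInt(key.length);                // key length
        out.write(key);
        out.write(value);
        return buf.toByteArray();
    }
}
```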
[
http://issues.apache.org/jira/browse/HADOOP-54?page=comments#action_12422661 ]
Arun C Murthy commented on HADOOP-54:
-
Issues which I came across while implementing the above proposal...
1. The implementation of the public interface
Sequ
Sorry, it seems I posted the thread to the wrong mailing list; I am correcting it now.
-- Forwarded message --
From: Jack Tang <[EMAIL PROTECTED]>
Date: Jul 21, 2006 5:24 PM
Subject: Distributed Matrix Computering on Hadoop
To: nutch-dev@lucene.apache.org
Hi list,
I am now facing one pr
Introduce a way for datanodes to register their HTTP info ports with the
NameNode
-
Key: HADOOP-375
URL: http://issues.apache.org/jira/browse/HADOOP-375
Project: Hadoop
[
http://issues.apache.org/jira/browse/HADOOP-347?page=comments#action_12422608 ]
Devaraj Das commented on HADOOP-347:
Yeah this problem is there. Thanks for pointing it out. I will create a
separate issue to handle multiple datanodes in a s
[
http://issues.apache.org/jira/browse/HADOOP-347?page=comments#action_12422603 ]
Johan Oskarson commented on HADOOP-347:
---
This patch is causing problems for me: if a computer has a second dfs data dir
in the config it doesn't start properly
humm...
but this needs to be addressed for speculative execution anyway. So
this argument doesn't apply to well designed code.
This doesn't prove we do need to make the change...
On Jul 20, 2006, at 3:22 PM, Konstantin Shvachko wrote:
The problem with increasing the lease period is that in
[
http://issues.apache.org/jira/browse/HADOOP-344?page=comments#action_12422571 ]
Doug Cutting commented on HADOOP-344:
-
Wouldn't it be better to change DF to do 'new File(path).getCanonicalPath()',
rather than change the caller?
> TaskTrac
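Doug's suggestion can be illustrated with plain JDK calls (DF is Hadoop's disk-usage helper; the path below is invented for the example):

```java
import java.io.File;
import java.io.IOException;

// Illustrative: canonicalizing inside DF, as Doug suggests, means every
// caller's spelling of a path collapses to one absolute form.
public class CanonicalDemo {
    public static void main(String[] args) throws IOException {
        File f = new File("./some/dir/../dir"); // as a caller might pass it
        // getCanonicalPath resolves ".", "..", and symlinks, so DF would
        // see the same string regardless of how the caller wrote the path.
        System.out.println(f.getCanonicalPath());
    }
}
```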
[
http://issues.apache.org/jira/browse/HADOOP-372?page=comments#action_12422565 ]
Doug Cutting commented on HADOOP-372:
-
> My thought is to add a Map object to the JobConf class [ ...]
JobConf, underneath, is a Properties instance, mapping S